14 results for design space exploration

in Doria (National Library of Finland DSpace Services), National Library of Finland, Finland


Relevance:

100.00%

Publisher:

Abstract:

Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient designs. It provides product differentiation against a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to computation-intensive but not data-intensive applications is often infeasible in practice. This thesis performs architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. To meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform and resource-aware partitioning and mapping of the application play an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized to the platform architecture. To study the resource utilization of MPSoC communication platforms, architectures with different design parameters and performance levels are selected, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC). Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault in one component renders the connected fault-free components inoperative. A resource sharing approach can utilize these fault-free components to retain system performance by reducing the impact of faults. Design space exploration also guides narrowing down the selection of an MPSoC architecture that can meet the performance requirements within the design constraints.
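The exploration loop behind such architecture-level design space exploration, evaluating candidate platforms on average packet latency and power consumption and keeping the non-dominated ones, can be sketched as follows. The platform names reuse those listed above (SegBus, NoC, 3D-NoC), but all figures and the extra "NoC-alt" point are invented for this illustration, not results from the thesis.

```python
# Illustrative design space exploration sweep (all figures hypothetical).
# Each candidate communication architecture is scored on the two evaluation
# parameters used above: average packet latency and power consumption.

def pareto_front(candidates):
    """Keep candidates not dominated on (latency, power); lower is better."""
    front = []
    for name, lat, pw in candidates:
        dominated = any(l <= lat and p <= pw and (l, p) != (lat, pw)
                        for _, l, p in candidates)
        if not dominated:
            front.append((name, lat, pw))
    return front

# Hypothetical evaluation results for three platform families.
candidates = [
    ("SegBus", 120.0, 0.8),   # low power, high latency
    ("NoC",     45.0, 1.5),   # balanced
    ("3D-NoC",  30.0, 2.4),   # low latency, high power
    ("NoC-alt", 60.0, 1.9),   # dominated by "NoC", so pruned
]

front = pareto_front(candidates)
```

The designer then picks from the Pareto front the architecture that meets the performance requirement within the remaining design constraints.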

Relevance:

100.00%

Publisher:

Abstract:

As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the powerful expression of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on, e.g., size, power consumption and price as embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks.
Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
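As a rough illustration of the transport triggered architecture underlying TACO (this toy is not the TACO model itself, and the unit and port names are invented): in a TTA, a program consists only of data transports between function unit ports, and writing a unit's trigger port is what starts its operation.

```python
# Toy sketch of transport-triggered execution (not the actual TACO model):
# instructions are data transports, and writing a function unit's trigger
# port fires the operation, as in transport triggered architectures.

class AddUnit:
    """Function unit with one operand port and one trigger port."""
    def __init__(self):
        self.operand = 0
        self.result = 0

    def write(self, port, value):
        if port == "operand":
            self.operand = value          # plain move: just stores the value
        elif port == "trigger":
            self.result = self.operand + value  # trigger write fires the add

alu = AddUnit()
# A "program" is just an ordered list of transports: (value, unit, port).
for value, unit, port in [(3, alu, "operand"), (4, alu, "trigger")]:
    unit.write(port, value)
```

After the two transports, the unit's result port holds the sum 3 + 4 = 7; the instruction decoder only ever routes moves, which is the source of the simplified decoding mentioned above.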

Relevance:

100.00%

Publisher:

Abstract:

With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the currently popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field: digital filters are typically described with boxes and arrows also in textbooks. Dataflow is also becoming more interesting in other domains and, in principle, any application working on an information stream fits the dataflow paradigm. Such applications include network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
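The firing discipline described above, nodes communicating only through FIFO queues and firing independently once sufficient inputs are available, can be sketched minimally; the node and the token values are invented for illustration.

```python
# Minimal sketch of the dataflow execution model: nodes communicate only
# through FIFO queues and fire once enough inputs are available.
from collections import deque

class Node:
    def __init__(self, func, inputs, output):
        self.func, self.inputs, self.output = func, inputs, output

    def can_fire(self):
        """Firing rule: one token available on every input queue."""
        return all(len(q) > 0 for q in self.inputs)

    def fire(self):
        """Consume one token per input, produce one output token."""
        args = [q.popleft() for q in self.inputs]
        self.output.append(self.func(*args))

a, b, out = deque([1, 2, 3]), deque([10, 20, 30]), deque()
adder = Node(lambda x, y: x + y, [a, b], out)
while adder.can_fire():
    adder.fire()
```

Because the queues are the only communication, any two nodes whose firing rules are satisfied could run on different cores without further synchronization, which is exactly the explicit parallelism the text describes.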
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect the scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
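The idea of finding a static schedule by exploring a bounded state space, in the spirit of the model-checking formulation above, can be illustrated on a tiny invented two-actor graph (this is a simplification; the thesis works on full RVC-CAL programs). Actor A produces 2 tokens per firing into a bounded buffer, actor B consumes 3; a breadth-first search over buffer states finds the shortest firing sequence that returns the buffer to its initial state, i.e. a repeatable static schedule.

```python
# Sketch: derive a static schedule by state-space search over buffer states.
# Graph, rates, and buffer bound are invented for illustration.
from collections import deque

PRODUCE, CONSUME, CAPACITY = 2, 3, 6

def successors(tokens):
    """Enabled firings from a given buffer state."""
    if tokens + PRODUCE <= CAPACITY:
        yield "A", tokens + PRODUCE    # A produces 2 tokens
    if tokens >= CONSUME:
        yield "B", tokens - CONSUME    # B consumes 3 tokens

def find_schedule(start=0):
    """BFS for the shortest non-empty firing sequence returning to start."""
    queue = deque([(start, [])])
    seen = set()
    while queue:
        tokens, seq = queue.popleft()
        if seq and tokens == start:
            return seq
        for actor, nxt in successors(tokens):
            key = (nxt, len(seq) + 1)
            if key not in seen:        # prune revisits at the same depth
                seen.add(key)
                queue.append((nxt, seq + [actor]))
    return None

schedule = find_schedule()
```

The search returns a sequence in which A fires three times and B twice (3 x 2 = 2 x 3 tokens), so the schedule can loop forever with bounded buffers and no run-time firing-rule evaluation.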
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.

Relevance:

40.00%

Publisher:

Abstract:

The pulsed electroacoustic (PEA) method is a commonly used non-destructive technique for investigating space charges, and has been under development since the early 1980s. Today there is continuing interest in better understanding the influence of space charge on the reliability of solid electrical insulation under high electric fields. The PEA method is widely used for space charge profiling because it is robust and relatively inexpensive. The technique relies on a voltage impulse that temporarily disturbs the space charge equilibrium in a dielectric. The resulting charge movement in the sample generates an acoustic wave, which is detected by means of a piezoelectric film; the spatial distribution of the space charge is contained within the detected signal. The principle of such a system is already well established, and several kinds of setups have been constructed for different measurement needs. This thesis presents the design of a PEA measurement system as a systems engineering project. The operating principle and some recent developments are summarised. The steps of the electrical and mechanical design of the instrument are discussed. A common procedure for measuring space charges is explained and applied to verify the functionality of the system. The measurement system is provided as an additional basic research tool for the Corporate Research Centre of ABB (China) Ltd. It can be used to characterise flat samples with a thickness of 0.2–0.5 mm under DC stress. The spatial resolution of the measurement is 20 μm.

Relevance:

40.00%

Publisher:

Abstract:

Multiprocessing is a promising solution to meet the requirements of near-future applications. To get the full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduces power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues that are important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented.
Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots, with performance similar to that of state-of-the-art NoC architectures. The thesis concludes that carefully co-designed elements at different network levels enable considerable power savings for many-core systems.
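As an illustration of the routing concepts mentioned above (a generic mesh sketch, not the thesis's hybrid 3D algorithm): deterministic XY routing fixed in advance versus a congestion-aware choice among the minimal next hops, with invented congestion values.

```python
# Illustrative mesh routing choices (parameters invented): deterministic
# XY routing versus a congestion-aware pick among minimal next hops.

def xy_route(src, dst):
    """Deterministic XY routing: resolve the X offset fully, then Y."""
    x, y = src
    hops = []
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        hops.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        hops.append((x, y))
    return hops

def adaptive_next_hop(pos, dst, congestion):
    """Pick the least-congested productive neighbour (minimal routing)."""
    x, y = pos
    options = []
    if x != dst[0]:
        options.append((x + (1 if dst[0] > x else -1), y))
    if y != dst[1]:
        options.append((x, y + (1 if dst[1] > y else -1)))
    return min(options, key=lambda n: congestion.get(n, 0))

route = xy_route((0, 0), (2, 1))
congestion = {(1, 0): 5, (0, 1): 1}   # hypothetical link loads
step = adaptive_next_hop((0, 0), (2, 1), congestion)
```

Here XY routing would enter the congested node (1, 0) regardless, while the adaptive choice detours through (0, 1) without leaving the set of shortest paths, which is the essence of congestion-aware minimal routing.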

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this engineering thesis is to develop a space instrument that is part of BepiColombo, a joint mission of the European Space Agency (ESA) and the Japanese space agency JAXA. The satellite will be launched towards an orbit around the planet Mercury in 2013. The spacecraft's journey to Mercury will take a total of six years, arriving in 2019. One of the scientific instruments of the BepiColombo satellite is the SIXS instrument (Solar Intensity X-ray and particle Spectrometer), developed by Oxford Instruments Analytical Oy. Its purpose is to measure X-ray and particle radiation coming from the Sun. It operates together with the MIXS instrument (Mercury Imaging X-ray Spectrometer), which measures the surface of Mercury. From the results, the elements of which Mercury's surface is composed can be analysed. The thesis begins with the theoretical background of elemental measurement, to the extent necessary for this work. It then goes deeper into the technology and mechanical design of the instrument responsible for measuring solar radiation. The main aspects of the instrument's thermal design, vibration measurements and strength analysis are included. As a result of the work, a prototype of the instrument's particle detector and a model of the instrument housing have been developed. The final size of the instrument housing will be determined by the space required by the electronics. Development of the instrument continues at Oxford Instruments Analytical Oy until 2011.

Relevance:

30.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology that facilitates the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within the inequality constraints of the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric (in the mathematical sense that all parameter values are considered, including the degenerate cases, for which the system is solvable) algebraic systems of n equations and at least n+1 variables. Adopting the developed solution method for solving the dyadic equations in direct polynomial form with two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved. The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation (with mathematical programming techniques or by adopting optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) are cancelled. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
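The "n equations in n variables" setting of exact synthesis can be illustrated with a generic Newton iteration on an invented two-equation system (not an actual dyad equation from the thesis; the example system and starting point are chosen purely for illustration).

```python
# Toy instance of the "n nonlinear equations in n variables" setting:
# Newton iteration on an invented 2x2 system, solved with Cramer's rule.

def newton2(f, g, jac, x, y, tol=1e-12, iters=50):
    for _ in range(iters):
        fx, gx = f(x, y), g(x, y)
        a, b, c, d = jac(x, y)           # 2x2 Jacobian [[a, b], [c, d]]
        det = a * d - b * c
        dx = (fx * d - gx * b) / det     # solve J * [dx, dy] = [fx, gx]
        dy = (a * gx - c * fx) / det
        x, y = x - dx, y - dy
        if abs(fx) < tol and abs(gx) < tol:
            break
    return x, y

# x^2 + y^2 = 25 and x + y = 7 intersect at (3, 4) and (4, 3); like exact
# synthesis problems, the system has multiple solutions, and a local solver
# finds only the one nearest its starting guess.
f = lambda x, y: x * x + y * y - 25
g = lambda x, y: x + y - 7
jac = lambda x, y: (2 * x, 2 * y, 1, 1)

root = newton2(f, g, jac, 1.0, 5.0)
```

This also makes the drawback labelled (ib) above concrete: from a different starting guess the iteration converges to the other root, and no single run reveals all solutions, which is why the thesis pursues methods that generate all solutions.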

Relevance:

30.00%

Publisher:

Abstract:

During the last few years the need for new motor types has grown, since both high efficiency and accurate dynamic performance are demanded in industrial applications. For this reason, new effective control systems such as direct torque control (DTC) have been developed. Permanent magnet synchronous motors (PMSM) are well suited to new adjustable-speed AC inverter drives, because their efficiency and power factor do not depend on the pole pair number and speed to the same extent as is the case in induction motors. Therefore, an induction motor (IM) with a mechanical gearbox can often be replaced with a direct PM motor drive. Space as well as costs are saved, because the efficiency increases and the cost of maintenance decreases. This thesis deals with the design criteria, analytical calculation and analysis of the permanent magnet synchronous motor for both sinusoidal and rectangular air-gap flux density. It is examined how the air-gap flux, flux densities, inductances and torque can be estimated analytically for salient-pole and non-salient-pole motors. By means of analytical calculations, the optimal construction has been sought for machines rotating at relatively low speeds of 300 rpm to 600 rpm, which are suitable speeds e.g. in the pulp and paper industry. The calculations are verified by using finite element calculations and by measurements on a prototype motor. The prototype motor is a 45 kW, 600 rpm PMSM with buried V-magnets, which is a very appropriate construction for high-torque motors with high performance. With the purpose-built prototype machine it is possible not only to verify the analytical calculations but also to show whether the 600 rpm PMSM can replace the 1500 rpm IM with a gear. It can also be tested whether the outer dimensions of the PMSM may be the same as for the IM and whether the PMSM in this case can produce a 2.5-fold torque, in consequence of which it may be possible to achieve the same power.
The thesis also considers the question of how to design a permanent magnet synchronous motor for relatively low-speed applications that require high motor torque and efficiency as well as acceptable costs of permanent magnet materials. It is shown how the selection of different parameters affects the motor properties. Key words: Permanent magnet synchronous motor, PMSM, surface magnets, buried magnets
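The torque of a salient-pole PMSM is commonly estimated in the rotor (dq) reference frame with the standard expression T = 1.5 * p * (psi_pm * i_q + (L_d - L_q) * i_d * i_q); the sketch below uses invented parameter values, not data from the 45 kW prototype.

```python
# Standard dq-frame torque expression for a salient-pole PMSM.
# All parameter values below are hypothetical, for illustration only.

def pmsm_torque(p, psi_pm, L_d, L_q, i_d, i_q):
    """Electromagnetic torque [Nm]: magnet torque plus reluctance torque.

    p      -- pole pair number
    psi_pm -- permanent magnet flux linkage [Vs]
    L_d/L_q -- d- and q-axis inductances [H]
    i_d/i_q -- d- and q-axis stator currents [A]
    """
    return 1.5 * p * (psi_pm * i_q + (L_d - L_q) * i_d * i_q)

# Buried-magnet rotors typically have L_d < L_q, so a negative i_d adds
# reluctance torque on top of the magnet torque.
T = pmsm_torque(p=4, psi_pm=0.5, L_d=0.004, L_q=0.007, i_d=-20.0, i_q=100.0)
```

With these invented values the magnet torque term contributes 300 Nm and the reluctance term 36 Nm, showing why buried-magnet (salient) designs suit the high-torque, low-speed machines discussed above.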

Relevance:

30.00%

Publisher:

Abstract:

This study combines several projects related to flows in vessels with complex shapes representing different chemical apparatuses. Three major cases were studied. The first is a two-phase plate reactor with a complex structure of intersecting micro channels engraved on one plate, which is covered by another plain plate. The second case is a tubular microreactor, consisting of two subcases. The first subcase is a multi-channel two-component commercial micromixer (slit interdigital) used to mix two liquid reagents before they enter the reactor. The second subcase is a micro-tube, where the distribution of the heat generated by the reaction was studied. The third case is a conventionally packed column. Here, however, flow, reactions and mass transfer were not modeled. Instead, the research focused on how to describe mathematically the realistic geometry of the column packing, which is rather random and cannot be created using conventional computer-aided design or engineering (CAD/CAE) methods. Several modeling approaches were used to describe the performance of the processes in the considered vessels. Computational fluid dynamics (CFD) was used to describe the details of the flow in the plate microreactor and micromixer. A space-averaged mass transfer model based on Fick's law was used to describe the exchange of species through the gas-liquid interface in the microreactor. This model utilized data, namely the values of the interfacial area, obtained with the corresponding CFD model. A common heat transfer model was used to find the heat distribution in the micro-tube. To generate the column packing, an additional multibody dynamics model was implemented. An auxiliary simulation was carried out to determine the position and orientation of every packing element in the column. This data was then exported into a CAD system to generate the desired geometry, which could further be used for CFD simulations.
The results demonstrated that the CFD model of the microreactor predicted the flow pattern well and agreed with experiments. The mass transfer model allowed the mass transfer coefficient to be estimated. Modeling of the second case showed that the flow in the micromixer and the heat transfer in the tube could be excluded from the larger model describing the chemical kinetics in the reactor. The results of the third case demonstrated that the auxiliary simulation could successfully generate complex random packing, not only for this column but also for other similar cases.
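A space-averaged, Fick's-law-type interphase transfer rate of the kind described above can be sketched as N = k_L * A * (C_sat - C_bulk), where the interfacial area A is the sort of quantity the CFD model supplies; all numbers below are hypothetical illustration, not data from the thesis.

```python
# Hedged sketch of a space-averaged gas-liquid mass transfer rate,
# driven by the concentration difference across the interface.
# All values are invented for illustration.

def mass_transfer_rate(k_L, area, c_sat, c_bulk):
    """Molar transfer rate across the gas-liquid interface [mol/s]."""
    return k_L * area * (c_sat - c_bulk)

N = mass_transfer_rate(k_L=1e-4,      # liquid-side coefficient [m/s]
                       area=2e-3,     # interfacial area, e.g. from CFD [m^2]
                       c_sat=40.0,    # saturation concentration [mol/m^3]
                       c_bulk=10.0)   # bulk liquid concentration [mol/m^3]
```

Coupling works in one direction here: the detailed CFD model provides the interfacial area, and the lumped model then gives the transfer rate without resolving the interface itself.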

Relevance:

30.00%

Publisher:

Abstract:

Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been to raise the operating frequency of the chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications. With their computational power, these platforms are likely to be used in various application domains: from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level.
The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to the postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models in, e.g., a hardware description language, namely VHDL.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this Master's thesis was to study how service design can be used to create better customer experiences. The theoretical part of the study focused on the concepts of service design and customer experience. The empirical part was carried out as a qualitative study comparing customer experiences at two joint service points of the City of Espoo: the service point in Leppävaara had recently been redesigned using service design principles, while the Matinkylä service point was in its original form at the time of the study. One aim of the study was also to determine whether the design project carried out with service design methods at the Leppävaara service point had been successful when evaluated from the perspective of customer experience. The data for the study was collected through direct observation and interviews. A total of 33 individual customers were observed and interviewed for the study in May 2015. The respondents were selected at random from the customers present on the observation days. They were observed throughout the entire customer journey, after which they were interviewed. The results of the study are twofold. 1) When evaluating customer experiences related to the functionality of the service space and the physical environment, it was found that service design achieved a better customer experience, and thus the Leppävaara design project had met its goals. 2) When the results were examined from the perspective of customer service encounters, however, the project had not met its goals and the service design methods had not improved the customer experience. The City of Espoo's design project is still ongoing; based on the results, the researcher proposed further measures, including additional training for the service staff.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation centres on the themes of knowledge creation, interdisciplinarity and knowledge work. My research approaches interdisciplinary knowledge creation (IKC) as practical, situated activity. I argue that approaching IKC from a practice-based perspective makes it possible to "deconstruct" how knowledge creation actually happens and to demystify its strong intellectual, mentalistic and expertise-based connotations. I have rendered the work of the observed knowledge workers into something ordinary, accessible and routinized. Consequently, this has made it possible to grasp the pragmatic challenges as well as the concrete drivers of such activity. Thus the effective organization of such activities becomes a question of organizing and leading effective everyday practices. To achieve that end, I conducted ethnographic research in one explicitly interdisciplinary space within higher education, Aalto Design Factory in Helsinki, Finland, where I observed how students from different disciplines collaborated in new product development projects. I argue that IKC is a multi-dimensional construct that intertwines a particular way of doing, a way of experiencing, a way of embodied being, and a way of reflecting on the very doing itself. This places emphasis not only on the practices themselves, but also on the way the individual experiences the practices, as this directly affects how the individual practises. My findings suggest that in order to effectively organize and execute knowledge creation activities, organizations need to better accept and manage the emergent diversity and complexity inherent in such activities. To accomplish this, I highlight the importance of understanding and using a variety of (material) objects, the centrality of mundane everyday practices, and the acceptance of contradictions and negotiations, as well as the role of management that is involved and engaged.
To succeed in interdisciplinary knowledge creation is to lead not only by example, but also by being very much present in the everyday practices that make it happen.

Relevance:

30.00%

Publisher:

Abstract:

Health Innovation Village at GE is one of the new communities targeted at startup and growth-oriented companies. It has been established on the premises of a multinational conglomerate to promote the networking and growth of startup companies. The concept combines features from traditional business incubators, accelerators, and coworking spaces. This research compares Health Innovation Village to these concepts regarding its goals, target clients, sources of income, organization, facilities, management, and success factors. In addition, a new incubator classification model is introduced. Health Innovation Village is also examined from its tenants' perspective, and improvements are suggested. The work was implemented as a qualitative case study by interviewing GE staff with connections to Health Innovation Village as well as startup entrepreneurs and employees working there. The most evident features of Health Innovation Village correspond to those of business incubators, although it is atypical as a non-profit corporate business incubator. Strong network orientation and connections to venture capitalists are common characteristics of these new types of accelerators. The design of the premises conforms to the principles of coworking spaces, but the services provided to the startup companies are considerably more versatile than those offered by coworking spaces. The advantages of Health Innovation Village are its first-class premises and exceptionally good networking possibilities that other types of incubators or accelerators are not able to offer. A conglomerate can also provide multifaceted specialist knowledge for young firms. In addition, both GE and the startups gained considerable publicity through their cooperation, a characteristic that benefits both parties. Most of the expectations of the entrepreneurs were exceeded. However, communication and the scope of cooperation remain challenges.
Micro companies spend their time developing and marketing their products and acquiring financing. Therefore, communication should be as clear as possible and accessible everywhere. The startups would prefer to cooperate significantly more, but few have the time available to assume the responsibility of leadership. The entrepreneurs also expected to have more possibilities for cooperation with GE. Wider collaboration might be accomplished by curation in the same way as it is used in the well-functioning coworking spaces where curators take care of practicalities and promote cooperation. Communication issues could be alleviated if the community had its own Intranet pages where all information could be concentrated. In particular, a common calendar and a room reservation system could be useful. In addition, it could be beneficial to have a section of the Intranet open for both the GE staff and the startups so that those willing to share their knowledge and those having project offers could use it for advertising.