18 results for Architecture and Complexity
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
In this thesis I examine Service Oriented Architecture (SOA), considering both its positive and negative qualities for business organizations and IT. In SOA, services are loosely coupled and invoked through standard interfaces to make business processes independent of the underlying technology. As an architecture, SOA brings the key benefit of service reuse, which may mean anything from simple application reuse to taking advantage of entire business processes across enterprises. SOA also promises interoperability, especially through the Web services standards that enable platform independence. Cost efficiency results mainly from savings in IT maintenance and reduced development costs. The most severe limitations of SOA are its performance implications and security issues, but its applicability is also limited. Additional disadvantages of a service-oriented approach include problems in data management and questions of complexity; moreover, the lack of agreement about SOA, and its twofold nature as both a business and a technology approach, makes the available information difficult to interpret. In this thesis I identify the benefits and limitations of SOA for the purpose described above and propose that companies consider the decision to implement SOA carefully, to determine whether the benefits will outweigh the costs in their individual case.
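The loose coupling described above can be illustrated with a minimal sketch (hypothetical service and class names, not from the thesis): the business process depends only on a service contract, so a legacy wrapper and a partner service stub can be reused or swapped without changing the process.

    from abc import ABC, abstractmethod

    class CreditCheckService(ABC):
        """Service contract: consumers depend only on this interface."""
        @abstractmethod
        def approve(self, customer_id: str, amount: float) -> bool: ...

    class LegacyMainframeCreditCheck(CreditCheckService):
        def approve(self, customer_id: str, amount: float) -> bool:
            # Wraps an existing application; reused rather than rewritten.
            return amount < 10_000

    class PartnerWebServiceCreditCheck(CreditCheckService):
        def approve(self, customer_id: str, amount: float) -> bool:
            # A real consumer would invoke a partner's Web service here.
            return amount < 50_000

    def process_order(credit: CreditCheckService, customer: str, amount: float) -> str:
        # The business process never names a concrete implementation.
        return "accepted" if credit.approve(customer, amount) else "rejected"

    print(process_order(LegacyMainframeCreditCheck(), "C42", 7500.0))    # accepted
    print(process_order(PartnerWebServiceCreditCheck(), "C42", 25000.0)) # accepted

Swapping the implementation requires no change to process_order, which is the reuse and technology independence the thesis weighs against SOA's performance and security costs.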
Abstract:
The size and complexity of software development projects are growing very fast. At the same time, the proportion of successful projects is still quite low according to previous research. Although almost every project team knows the main areas of responsibility that would help finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating the chances of success that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was done as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. The information gathering was done in two stages. In the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet. In the second stage, the participant was interviewed, and his or her answers were discussed and refined. This made it possible to get accurate information about each project and to avoid errors. It was found that there are many problems in software development projects. These problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of two models of success prediction showed that The Standish Group model overestimates problems in a project, while McConnell's model can help to identify problems in time and avoid trouble in the future. A framework for evaluating the chances of success in distributed projects was suggested. The framework is similar to The Standish Group model but customized for distributed projects.
Abstract:
The aim of this dissertation is to bridge and synthesize the different streams of literature addressing ecosystem architecture through a multiple-lens perspective. In addition, the structural properties of, and the processes to design and manage, the architecture are examined. With this approach, the oft-neglected actor-structure duality is addressed, and position and structure as well as action and process come under scrutiny. Further, the developed framework and empirical evidence offer valuable insights into how firms collectively create value and individually appropriate value. The dissertation is divided into two parts. The first part comprises a literature review and the conclusions of the whole study; the second part includes six research publications. The dissertation draws on three different reasoning logics, abduction, induction and deduction, and the related qualitative and quantitative methodologies are utilized in the empirical examination of the phenomenon in the information and communication technology industry. The results suggest, firstly, that the ecosystem architecture has endogenous and exogenous structural properties. Of these, the former can be influenced more easily by a particular actor, whereas the latter are taken more or less for granted. Secondly, the exogenous design properties influence the value creation potential of the ecosystem, whereas the endogenous design properties influence the value appropriation potential of a particular actor in the ecosystem. Thirdly, the study suggests a relationship between the two: the endogenous properties can be leveraged to create and reconfigure the exogenous properties, whereas the exogenous properties pose opportunities for, and restrictions on, the use of the endogenous properties. In addition, the study suggests that there are different emergent and engineered processes to design and manage ecosystem architecture and to influence both its endogenous and exogenous structural properties. This study makes three main contributions. First, on the conceptual level, it brings coherence and direction to the fast-growing body of literature on novel inter-organizational arrangements, such as ecosystems, by bridging and synthesizing three different streams of literature, namely the boundary, design and orchestration conceptions. Secondly, it sets out a framework that enhances our understanding of the structural properties of ecosystem architecture, of the processes to design and manage it, and of their influence on the value creation potential of the ecosystem and the value capture potential of a particular firm. Thirdly, it offers empirical evidence of the structural properties and processes.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
There is a broad consensus among economists that technological change has been a major contributor to productivity growth and, hence, to the growth of material welfare in western industrialized countries, at least over the last century. Paradoxically, this issue has not been the focal point of theoretical economics. At the same time, we have witnessed the rise of the importance of technological issues at the strategic management level of business firms. Interestingly, research has not accurately responded to this challenge either. The tension between the overwhelming empirical evidence of the importance of technology and its relative omission in research offers a challenging target for a methodological endeavor. This study deals with the question of how different theories cope with technology and explain technological change. The focus is at the firm level, and the analysis concentrates on metatheoretical issues, except for the last two chapters, which examine the problems of strategic management of technology. Here the aim is to build a new evolutionary-based theoretical framework to analyze innovation processes at the firm level. The study consists of ten chapters. Chapter 1 poses the research problem and contrasts the two basic approaches, neoclassical and evolutionary, to be analyzed. Chapter 2 introduces the methodological framework, which is based on the methodology of isolation. Methodological and ontological commitments of the rival approaches are revealed, and basic questions concerning their ways of theorizing are elaborated. Chapters 3-6 deal with the so-called substantive isolative criteria. The aim is to examine how the different approaches cope with such critical issues as the inherent uncertainty and complexity of innovative activities (cognitive isolations, chapter 3), the boundedness of rationality of innovating agents (behavioral isolations, chapter 4), the multidimensional nature of technology (chapter 5), and governance costs related to technology (chapter 6). Chapters 7 and 8 put all these things together and look at the explanatory structures used by the neoclassical and evolutionary approaches in the light of the substantive isolations. The last two chapters of the study utilize the methodological framework and tools to appraise different economics-based candidates in the context of strategic management of technology. The aim is to analyze how the different approaches answer the fundamental question: how can firms gain competitive advantage through innovations, and how can the rents appropriated from successful innovations be sustained? The last chapter introduces a new evolutionary-based technology management framework. The largely omitted issues of entrepreneurship are also examined.
Abstract:
This master's thesis examines the WAP Push framework. The WAP standards define how Internet-type services, accessed with various mobile terminal devices, are implemented in an efficient and network-technology-independent way. WAP is based on the Internet but takes into account the limitations and special characteristics of small terminal devices and mobile networks. The WAP Push framework defines network-initiated delivery of service content. The theoretical part of the thesis reviews the general WAP architecture and the WAP protocol stack, using the architecture and protocol stack of the wired Internet as points of comparison. Building on this, the WAP Push framework is introduced. The practical part describes the design and development of a WAP Push proxy gateway. The push proxy gateway is a central network element in the WAP Push framework: it connects the Internet and the mobile network in a way that hides the technology differences from the service provider residing on the Internet.
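The gateway's mediating role can be sketched as follows, under a deliberately simplified, hypothetical submission interface (the real Push Access Protocol carries multipart XML over HTTP, and terminal delivery uses the WAP over-the-air protocol):

    import json

    class OtaChannel:
        """Stand-in for the mobile-network side of the gateway; hypothetical."""
        def deliver(self, msisdn: str, payload: bytes) -> None:
            print(f"over-the-air push to {msisdn}: {payload!r}")

    class PushProxyGateway:
        """Accepts Internet-side push submissions and hides the mobile
        network's technology from the push initiator."""
        def __init__(self, ota: OtaChannel) -> None:
            self.ota = ota

        def submit(self, request_body: str) -> str:
            request = json.loads(request_body)     # real PAP: multipart XML
            payload = request["content"].encode()  # real PPG: re-encode for the mobile bearer
            self.ota.deliver(request["address"], payload)
            return "accepted"                      # real PAP: XML result notification

    ppg = PushProxyGateway(OtaChannel())
    print(ppg.submit('{"address": "+358401234567", "content": "news update"}'))

The push initiator speaks only the Internet-side protocol; everything bearer-specific stays inside the gateway, which is the technology-hiding role described above.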
Abstract:
What is presence? This thesis defines presence as the willingness of a given person, device, or service to communicate. Today there are numerous applications that distribute presence information, each using a different protocol for the task. Only recently have application developers recognized the need for a single application capable of supporting multiple presence protocols. The Session Initiation Protocol (SIP) can distribute presence information in addition to its other capabilities. Whereas other protocols are used only for real-time messaging and the delivery of presence information, SIP is capable of much more: it was originally designed to initiate, modify, and terminate multimedia sessions between parties. The implemented architecture uses two basic features of the Symbian operating system: the client-server structure and the contact database. The client-server structure separates the client from the protocol, providing a foundation for an extensible multi-protocol architecture, and the contact database serves as the store for presence information. The result of the work is a presence client running on the Symbian operating system.
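The protocol-agnostic structure described above can be sketched as follows (hypothetical names; the actual client is built on Symbian's native client-server framework and contact database rather than Python):

    from abc import ABC, abstractmethod

    class PresenceProtocol(ABC):
        """One plug-in per presence protocol; the client stays protocol-agnostic."""
        @abstractmethod
        def fetch(self, contact: str) -> str: ...

    class SipPresence(PresenceProtocol):
        def fetch(self, contact: str) -> str:
            # A real implementation would use SIP SUBSCRIBE/NOTIFY here.
            return "available"

    class PresenceClient:
        def __init__(self) -> None:
            self.protocols = {}   # URI scheme -> PresenceProtocol plug-in
            self.contact_db = {}  # plays the contact database's role as presence store

        def register(self, scheme: str, protocol: PresenceProtocol) -> None:
            self.protocols[scheme] = protocol

        def update(self, uri: str) -> str:
            scheme, _, contact = uri.partition(":")
            status = self.protocols[scheme].fetch(contact)
            self.contact_db[uri] = status
            return status

    client = PresenceClient()
    client.register("sip", SipPresence())
    print(client.update("sip:alice@example.com"))  # available

Adding support for another presence protocol means registering another plug-in; the client and its presence store are untouched.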
Abstract:
The operation of telecommunication devices is based on commonly agreed standards and recommendations. To ensure standards compliance, products must be tested both against the standards and against other products so that the interoperability of devices can be assured. As the size and complexity of software keeps growing, so does the need for automated testing methods. The goal of this master's thesis was to determine whether the ANVL protocol validation tool is suitable for the acceptance testing of Tellabs' modem products and to integrate ANVL into Tellabs' testing practice. The thesis also reviews the basic concepts of software testing and presents ANVL's internal structure and operating principles. It was found that ANVL is well suited to the validation testing of individual data transmission devices. ANVL's customization options are good, and it can be extended with self-implemented protocol validation packages. The tool will be adopted in Tellabs' new product development projects.
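ANVL's own API is not reproduced here, but the table-driven idea of protocol validation testing, sending standard-mandated stimuli and checking the device's responses, can be sketched as follows (entirely hypothetical frames and device):

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class ConformanceTest:
        name: str
        stimulus: bytes   # frame sent to the device under test
        expected: bytes   # response required by the standard

    def run_suite(dut: Callable[[bytes], bytes], suite: List[ConformanceTest]) -> None:
        for test in suite:
            actual = dut(test.stimulus)
            verdict = "PASS" if actual == test.expected else "FAIL"
            print(f"{verdict}: {test.name}")

    # A trivial echo function stands in for a real modem under test.
    def echo_dut(frame: bytes) -> bytes:
        return frame

    run_suite(echo_dut, [
        ConformanceTest("echo-loopback", b"\x7e\x01\x7e", b"\x7e\x01\x7e"),
        ConformanceTest("discard-garbage", b"\xff", b""),  # fails against the echo stand-in
    ])

Extending such a harness with self-implemented protocol packages corresponds to the customization possibilities the thesis found valuable in ANVL.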
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriad databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can therefore expect that the findings of these surveys may be biased, especially owing to the steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment of the Web. Finding deep web resources: the deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep Web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and infeasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. In this way, automating the querying and retrieval of data behind search interfaces is desirable and essential for such tasks as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
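A minimal sketch of a search-interface data model of the kind described might look as follows (hypothetical URL and field names; the thesis's actual model also covers extracted labels, client-side scripts, and result-page structure):

    from dataclasses import dataclass, field
    from typing import Dict, List
    from urllib.parse import urlencode

    @dataclass
    class FormField:
        label: str   # human-readable label extracted from the page
        name: str    # HTML input name submitted to the server

    @dataclass
    class SearchInterface:
        action_url: str           # the form's submission endpoint
        method: str = "GET"
        fields: List[FormField] = field(default_factory=list)

        def build_query(self, values: Dict[str, str]) -> str:
            """Map label-keyed query terms onto the form's HTML field names."""
            by_label = {f.label.lower(): f.name for f in self.fields}
            params = {by_label[k.lower()]: v for k, v in values.items()}
            return f"{self.action_url}?{urlencode(params)}"

    # A hypothetical book database hidden behind a web search form.
    iface = SearchInterface(
        action_url="http://example.org/books/search",
        fields=[FormField("Title", "q_title"), FormField("Author", "q_author")],
    )
    print(iface.build_query({"Title": "deep web", "Author": "Smith"}))

A form query language of the kind the thesis proposes would compile user conditions down to such parameter mappings and then extract structured data from the returned result pages.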
Abstract:
The thesis deals with the phenomenon of learning between organizations in innovation networks that develop new products, services or processes. Inter-organizational learning is studied especially at the level of the network. The role of the network can be seen as twofold: either the network is a context for inter-organizational learning, if the learner is something other than the network (an organization, group or individual), or the network itself is the learner. Innovations are regarded as a primary source of competitiveness and renewal in organizations. Networking has become increasingly common, particularly because of the possibility to extend the resource base of the organization through partnerships and to concentrate on core competencies. Especially in innovation activities, networks provide the possibility to answer the complex needs of customers faster and to share the costs and risks of the development work. In practice, networked innovation activities are often organized as distributed virtual teams, either within one organization or as cross-organizational co-operation. The role of technology is considered in the research mainly as an enabling tool for collaboration and learning. Learning has been recognized as an important collaborative process in networks and as a motivation for networking. It is even more important in the innovation context as an enabler of renewal, since the essence of the innovation process is creating new knowledge, processes, products and services. The thesis aims to provide an enhanced understanding of the inter-organizational learning phenomenon in and by innovation networks, concentrating especially on the network level. The perspectives used in the research are the relevant theoretical viewpoints and concepts, the challenges of learning, and the solutions for supporting it. The methods used in the study are literature reviews and empirical research carried out with semi-structured interviews analyzed with qualitative content analysis. The empirical research concentrates on two areas: firstly, the theoretical approaches to learning that are relevant to innovation networks, and secondly, learning in virtual innovation teams. As a result, the research identifies insights into and implications for learning in innovation networks from several viewpoints on organizational learning. Using multiple perspectives allows a many-sided picture of the learning phenomenon to be drawn, which is valuable because of the versatility and complexity of the situations and challenges of learning in the context of innovation and networks. The results also show some of the challenges of learning and possible solutions for supporting network-level learning in particular.
Abstract:
The purpose of this thesis was to investigate creating and improving category purchasing visibility for corporate procurement by utilizing financial information. The thesis was part of the global category-driven spend analysis project of Konecranes Plc. While building a general understanding of category-driven corporate spend visibility, the IT architecture and the purchasing parameters needed for spend analysis are described. In the case part of the study, three manufacturing plants from the Konecranes Standard Lifting, Heavy Lifting and Services business areas were examined, including their operative IT system architecture and the processes needed for building corporate spend visibility. The key finding of this study is the identification of the processes needed for gathering purchasing data elements when creating corporate spend visibility in a fragmented source-system environment. As an outcome of the study, a roadmap presenting further development areas was introduced for Konecranes.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on the performance and power consumption of on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router that allows a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first is a congestion-aware adaptive routing algorithm for 2D mesh NoCs that does not support multicast (one-to-many) traffic, while the other two are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs via efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level. Moreover, in order to increase memory parallelism and provide compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated into the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate for achieving better performance and package density than traditional 2D ICs. In addition, combining the benefits of 3D ICs and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication, improving performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
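The congestion-aware output selection idea can be illustrated with a minimal sketch (a generic minimal-adaptive scheme, not the specific algorithms of the thesis):

    def route(cur, dst, congestion):
        """Pick the least-congested productive output port in a 2D mesh.

        cur, dst: (x, y) router coordinates; congestion maps a port
        ('E', 'W', 'N', 'S') to the neighboring router's buffer occupancy.
        """
        dx, dy = dst[0] - cur[0], dst[1] - cur[1]
        candidates = []
        if dx > 0: candidates.append("E")
        if dx < 0: candidates.append("W")
        if dy > 0: candidates.append("N")
        if dy < 0: candidates.append("S")
        if not candidates:
            return "LOCAL"  # arrived: eject to the local processing element
        # Adaptive selection: among minimal routes, avoid congested neighbors.
        return min(candidates, key=lambda port: congestion[port])

    print(route((1, 1), (3, 2), {"E": 0.8, "W": 0.0, "N": 0.2, "S": 0.0}))  # N

A full router would pair this output selection with the input selection the thesis describes, arbitrating among input ports by their congestion levels, and with deadlock-avoidance restrictions on the allowed turns.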
Abstract:
The necessity of integrating EC (Electronic Commerce) and enterprise systems follows from the integrated nature of enterprise systems. The proven benefits of EC in providing competitive advantages force enterprises to adopt EC and integrate it with their enterprise systems. Integration is a complex task that must facilitate the seamless flow of information and data between different systems within and across enterprises. Different systems run on different platforms; to integrate systems with different platforms and infrastructures, integration technologies such as middleware, SOA (Service-Oriented Architecture), ESB (Enterprise Service Bus), JCA (J2EE Connector Architecture), and B2B (Business-to-Business) integration standards are required. Major software vendors, such as Oracle, IBM, Microsoft, and SAP, offer various solutions to EC and enterprise systems integration problems. There is only a limited body of literature covering the integration of EC and enterprise systems in detail: most studies in this area have focused on the factors that influence the adoption of EC by enterprises, or provide limited information about a specific platform or integration methodology in general. This thesis was therefore conducted to cover the technical details of EC and enterprise systems integration, addressing both the adoption factors and the integration solutions. In this study, a large body of literature was reviewed and different solutions were investigated, covering the main enterprise integration approaches as well as the most popular integration technologies. Moreover, various methodologies for integrating EC and enterprise systems were studied in detail and different solutions were examined. The factors influencing the adoption of EC in enterprises were studied on the basis of previous literature and categorized into technical, social, managerial, financial, and human resource factors. Integration technologies were categorized based on three levels of integration: data, application, and process. In addition, different integration approaches were identified and categorized based on their communication model and platform, and different EC integration solutions were investigated and categorized according to the identified integration approaches. By considering these different aspects of integration, this study is a valuable asset to architects, developers, and system integrators seeking to integrate EC with enterprise systems.
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, such as navigational links, command buttons, icons, small images, and thumbnails) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts that convey web content and system functionality, and because users interact with systems by means of interface signs. In the light of the above, applying semiotic concepts (semiotics being the study of signs) to web interface signs can uncover new and important perspectives on web user interface design and evaluation. The thesis focuses mainly on web interface signs and uses the theory of semiotics as its background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: what do practitioners and researchers need to be aware of, from a semiotic perspective, when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies involve a total of 74 participants in Finland. The steps of a design science research process are followed in designing and conducting the studies: (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data is collected using observations in a usability testing lab, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics are used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in both. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user needs in order to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs and a set of features related to ontology mapping in interpreting them. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation that aims to make signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs and a set of semiotic heuristics for designing and evaluating interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness, and reliability) and (b) the contributions of the SIDE framework from the evaluators' perspective.
Abstract:
Thesis: A liquid-cooled, direct-drive, permanent-magnet, synchronous generator with helical, double-layer, non-overlapping windings formed from a copper conductor with a coaxial internal coolant conduit offers an excellent combination of attributes to reliably provide economic wind power for the coming generation of wind turbines with power ratings between 5 and 20 MW. A generator based on the liquid-cooled architecture proposed here will be reliable and cost effective. Its smaller size and mass will reduce build, transport, and installation costs. Summary: Converting wind energy into electricity and transmitting it to an electrical power grid to supply consumers is a relatively new and rapidly developing method of electricity generation. In the most recent decade, the increase in wind energy's share of overall energy production has been remarkable. Thousands of land-based and offshore wind turbines have been commissioned around the globe, and thousands more are being planned. The technologies have evolved rapidly and are continuing to evolve, and wind turbine sizes and power ratings are continually increasing. Many of the newer wind turbine designs feature drivetrains based on Direct-Drive, Permanent-Magnet, Synchronous Generators (DD-PMSGs). Being low-speed, high-torque machines, air-cooled DD-PMSGs must become very large in diameter to generate higher levels of power. The largest direct-drive wind turbine generator in operation today, rated just below 8 MW, is 12 m in diameter and weighs approximately 220 tonnes. To generate higher powers, traditional DD-PMSGs would need to become extraordinarily large. A 15 MW air-cooled direct-drive generator would be of colossal size and tremendous mass and no longer economically viable. One alternative to increasing the diameter is to increase the torque density. In a permanent magnet machine, this is best done by increasing the linear current density of the stator windings. However, greater linear current density results in more Joule heating, and the additional heat cannot be removed practically using a traditional air-cooling approach. Direct liquid cooling is more effective, and when applied directly to the stator windings, higher linear current densities can be sustained, leading to substantial increases in torque density. The higher torque density, in turn, makes possible significant reductions in DD-PMSG size. Over the past five years, a multidisciplinary team of researchers has applied a holistic approach to explore the application of liquid cooling to permanent-magnet wind turbine generator design. The approach has considered wind energy markets and the economics of wind power, system reliability, electromagnetic behaviors and design, thermal design and performance, mechanical architecture and behaviors, and the performance modeling of installed wind turbines. This dissertation is based on seven publications that chronicle the work. The primary outcomes are the proposal of a novel generator architecture, a multidisciplinary set of analyses to predict its behaviors, and experimentation to demonstrate some of the key principles and validate the analyses. The proposed generator concept is a direct-drive, surface-magnet, synchronous generator with fractional-slot, duplex-helical, double-layer, non-overlapping windings formed from a copper conductor with a coaxial internal coolant conduit to accommodate liquid coolant flow. The novel liquid-cooling architecture is referred to as LC DD-PMSG.
The first of the seven publications summarized in this dissertation discusses the technological and economic benefits and limitations of DD-PMSGs as applied to wind energy. The second publication addresses the long-term reliability of the proposed LC DD-PMSG design. Publication 3 examines the machine's electromagnetic design, and Publication 4 introduces an optimization tool developed to quickly define basic machine parameters. The static and harmonic behaviors of the stator and rotor wheel structures are the subject of Publication 5. Finally, Publications 6 and 7 examine steady-state and transient thermal behaviors. There have been a number of ancillary concrete outcomes associated with the work, including the following: Intellectual Property (IP) for direct liquid cooling of stator windings via an embedded coaxial coolant conduit, IP for a lightweight wheel structure for low-speed, high-torque electrical machinery, and IP for numerous other details of the LC DD-PMSG design; analytical demonstrations of the equivalent reliability of the LC DD-PMSG; validated electromagnetic, thermal, structural, and dynamic prediction models; an analytical demonstration of the superior partial-load efficiency and annual energy output of an LC DD-PMSG design; a set of LC DD-PMSG design guidelines and an analytical tool to establish optimal geometries quickly and early on; and proposed 8 MW LC DD-PMSG concepts for both inner and outer rotor configurations. Furthermore, three technologies introduced could be relevant across a broader spectrum of applications. 1) The cost optimization methodology developed as part of this work could be further improved to produce a simple tool for establishing base geometries for various electromagnetic machine types. 2) The layered sheet-steel element construction technology used for the LC DD-PMSG stator and rotor wheel structures has potential for a wide range of applications. And finally, 3) the direct liquid-cooling technology could be beneficial in higher-speed electromotive applications such as vehicular electric drives.
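The scaling argument behind the liquid-cooled concept can be made explicit with standard machine-design proportionalities (textbook relations, not equations taken from the dissertation): torque grows with the tangential airgap stress and the rotor dimensions, the tangential stress grows with the linear current density of the stator winding, and, for a fixed slot geometry, the copper losses grow with the square of that current density, which is the heat that direct liquid cooling must remove.

    \[
      T \;=\; \sigma_{\mathrm{tan}}\,\frac{\pi D^{2}}{2}\,l,
      \qquad
      \sigma_{\mathrm{tan}} \;\propto\; A\,\hat{B}_{\delta},
      \qquad
      P_{\mathrm{Cu}} \;=\; \rho_{\mathrm{Cu}}\,V_{\mathrm{Cu}}\,J^{2} \;\propto\; A^{2},
    \]

where T is the torque, D and l are the airgap diameter and active length, A is the linear current density, B̂ is the peak airgap flux density, and J is the conductor current density (J is proportional to A for a fixed slot geometry). Doubling A roughly doubles the torque at a fixed diameter but quadruples the winding's Joule heating, which is why sustaining a high A requires coolant in direct contact with the conductors.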