904 results for 291605 Processor Architectures
Abstract:
App Engine is a name formed from the English terms application and engine. It is a commercial service implemented by Google, Inc. that follows the principles of the cloud computing model and enables customers to develop their own applications. A service of one's own design can be programmed into the system and used over the Internet, either privately or publicly. It is thus a distributed server system that provides an application platform which adapts dynamically to load and in which the customer does not rent virtual machines. The storage capacity offered by the system is likewise available elastically. The bachelor's thesis itself goes into more detail on implementing an application on the service, on its restrictions and on its suitability. It begins with an overview of the cloud concept, of which many computer users have only a vague understanding. Such systems can be put together in a great many ways, of which this treatment is limited to feasible, general solutions.
Abstract:
A new Cu(II) trimer, [Cu3(dcp)2(H2O)8]·4DMF, with the ligand 3,5-pyrazoledicarboxylic acid monohydrate (H3dcp), has been prepared by a solvent method. Its solid-state structure has been characterized by elemental analysis, thermal analysis (TGA and DSC), and single-crystal X-ray diffraction. The X-ray crystallographic studies reveal that this complex exhibits extended 1-D, 2-D and 3-D supramolecular architectures directed by weak interactions (hydrogen bonding and aromatic π-π stacking), leading to a sandwich-like solid-state structure.
Abstract:
The junction temperatures of an inverter's IGBT module cannot be measured directly, so a real-time thermal model is needed to estimate them. The goal of this work is to develop a solution for this purpose, implemented in C, that is sufficiently accurate and at the same time as computationally efficient as possible. The software implementation must also suit different module types and, when necessary, take into account the heating effect that the chips of a module have on each other. Based on a literature review, a model based on a thermal impedance matrix is chosen from among the existing thermal models as the basis for the practical implementation. An s-domain simulation model of the thermal impedance matrix is built in Simulink and used as a reference, among other things for verifying the accuracy of the implementation. The thermal model needs information about the inverter's losses, so different alternatives for loss calculation are examined in this work. The development of the thermal model from the s-domain model into finished C code is described in detail. First, the s-domain model is discretized into the z-domain. The z-domain transfer functions are then converted into first-order difference equations. The multirate thermal model developed in this work is obtained by distributing the first-order difference equations across different execution rates according to the update rate required by the term each equation describes. At best, such an implementation can consume less than one fifth of the clock cycles of a straightforward single-rate implementation. The accuracy of the implementation is good. The execution times required by the implementation were tested on a Texas Instruments TMS320C6727 processor (300 MHz). Computing the example model was determined to consume only 0.4% of the processor's clock cycles with the inverter operating at a 5 kHz switching frequency. The accuracy of the implementation and its low computational load make it possible to use the thermal model for thermal protection and to integrate it into an existing system already running on the processor.
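The step from z-domain transfer functions to first-order difference equations can be sketched as follows (a minimal single-node illustration in Python rather than the C of the thesis; the Foster-pair parameters and the constant loss input are placeholder assumptions, not values from the work):

```python
import math

def foster_step(T_prev, P, R, tau, dt):
    """One first-order difference-equation update, i.e. the exact z-domain
    discretization of a single Foster pair Z(s) = R / (1 + s*tau)."""
    a = math.exp(-dt / tau)              # pole of the z-domain transfer function
    return a * T_prev + (1.0 - a) * R * P

# Hypothetical parameters: R in K/W, tau in s; dt matches a 5 kHz update rate.
R, tau, dt = 0.05, 0.1, 1.0 / 5000.0
T = 0.0
for _ in range(10000):                   # 2 s of constant 100 W losses
    T = foster_step(T, 100.0, R, tau, dt)
print(round(T, 3))                       # settles at R * P = 5.0 K
```

A multirate version would simply call the updates of slow Foster pairs less often, with a correspondingly larger dt, which is where the reported clock-cycle savings come from.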
Abstract:
This article describes work done on the enterprise architecture of the National Digital Library. The National Digital Library is an initiative of the Finnish Ministry of Education and Culture. Its purpose is to promote the availability of the digital information resources of archives, libraries and museums, and to develop the long-term preservation of digital cultural heritage materials. Enterprise architectures are a tool for strategic management and planning. An enterprise architecture also functions as an aid at a more practical level. It shows, for example, what kinds of changes and improvements may be made in one system without overlap or conflict with other systems.
Abstract:
Robotic grasping has been studied increasingly for a few decades. While progress has been made in this field, robotic hands are still nowhere near the capability of human hands. However, in the past few years, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit the feedback from the hand itself, the sense of touch. The focus of this thesis lies in the use of this sense. The work described in this thesis approaches robotic grasping from two different viewpoints: robotic systems and data-driven grasping. The robotic systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central claims that the architecture was designed for are hardware independence and the use of sensors during grasping. These properties enable the use of multiple different robotic platforms within the architecture. Secondly, new data-driven methods are proposed that can be incorporated into the grasping process. The first of these methods is a novel way of learning grasp stability from the tactile and haptic feedback of the hand, instead of analytically solving the stability from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, enabling the method to be utilized with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties, such as pose, can be inferred with an associated uncertainty. This uncertainty is utilized by a grasp planning process which plans for stable grasps under the inferred uncertainty.
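The idea of learning grasp stability directly from sensed data, rather than solving it analytically from known contacts, can be illustrated with a minimal sketch (the feature layout, labels and perceptron classifier below are assumptions for illustration, not the method used in the thesis):

```python
# Each grasp is reduced to a tactile feature vector; a simple perceptron
# separates stable (1) from unstable (0) grasps with no hand kinematics.

def train_perceptron(data, labels, epochs=50, lr=0.1):
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                       # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy tactile data: [mean pressure, contact area]; higher values -> stable.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y = [1, 1, 0, 0]
w, b = train_perceptron(X, y)
print([predict(w, b, x) for x in X])
```

The point of the sketch is that nothing about the hand's kinematics enters the model; stability is judged purely from the sensed data.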
Abstract:
Personalised ubiquitous services have proliferated rapidly due to technological advancements in sensing, ubiquitous and mobile computing. Evolving societal trends, business and the economic potential of Personal Information (PI) have overlapped the service niches. At the same time, the societal thirst for more personalised services has increased, and it is met by soliciting deeper and more privacy-invasive PI from customers. Consequently, traditional privacy challenges are reinforced and new risks unearthed that render classical safeguards ineffective. The absence of solutions for critiquing personalised ubiquitous services from a privacy perspective aggravates the situation. This thesis presents a solution permitting users' PI, stored in their mobile terminals, to be disclosed to services in a privacy-preserving manner for personalisation needs. The approach, termed Mobile Electronic Personality Version 2 (ME2.0), is compared to alternative mechanisms. Within ME2.0, the PI-handling vulnerabilities of ubiquitous services are identified, and services are sensitised to their practices and privacy implications. Vulnerabilities where PI may leak through covert solicits, excessive acquisitions and legitimate data re-purposing to erode users' privacy are also considered. In this thesis, the design, components, internal structures, architectures, scenarios and evaluations of ME2.0 are detailed. The design addresses the implications and challenges posed by mobile terminals. The components and internal structures concern how pieces of PI are stored and handled by terminals and services. The architecture focuses on the different components and their exchanges with services. Scenarios where ME2.0 is used are presented from different environment views, before ME2.0 is evaluated for performance, privacy and usability.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
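As background for the device dynamics discussed above, the linear ion-drift model published by HP Labs can be sketched as follows (an illustrative Python toy, not a model taken from the thesis; all parameter values are arbitrary assumptions):

```python
# Linear ion-drift memristor model: the state w (normalized doped-region
# width, 0..1) drifts with current, and the memristance interpolates
# between R_on (fully doped) and R_off (fully undoped).

def simulate_memristor(v_of_t, dt, R_on=100.0, R_off=16000.0, mu=1e-14,
                       D=1e-8, w0=0.5):
    w = w0
    resistances = []
    for v in v_of_t:
        R = R_on * w + R_off * (1.0 - w)   # memristance M(w)
        i = v / R
        w += mu * R_on / D**2 * i * dt     # linear drift: dw/dt ∝ i
        w = min(1.0, max(0.0, w))          # hard state bounds
        resistances.append(R)
    return resistances

# A positive voltage pulse drives w up, lowering the memristance: the
# device "remembers" how much charge has passed through it.
rs = simulate_memristor([1.0] * 1000, dt=1e-6)
print(rs[0] > rs[-1])
```

The nonvolatile, history-dependent resistance shown here is exactly the property that lets a memristive crossbar both store data and process it in place.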
Abstract:
The thesis consists of four studies (articles I–IV) and a comprehensive summary. The aim is to deepen understanding and knowledge of newly qualified teachers' (NQTs') experiences of their induction practices. The research interest thus reflects the ambition to strengthen the research-based platform for support measures. The aim can be specified in the following four sub-areas: to scrutinise NQTs' experiences of the profession in the transition from education to work (study I); to describe and analyse NQTs' experiences of their first encounters with school and classroom (study II); to explore NQTs' experiences of their relationships within the school community (study III); and to view NQTs' experiences of support through peer-group mentoring as part of the wider aim of collaboration and assessment (study IV). The overall theoretical perspective is teachers' professional development. Induction forms an essential part of this continuum and can primarily be seen as a socialisation process into the profession and the social working environment of schools, as a unique phase of teachers' development contributing to certain experiences, and as a formal programme designed to support new teachers. These lines of research are initiated in the separate studies (I–IV) and deepened in the theoretical part of the comprehensive summary. In order to understand induction appropriately as a specific practice, the lines of research are finally united and discussed with the help of practice theory. More precisely, the theory of practice architectures, comprising semantic space, physical space-time and social space, is used. The methodological approach to integrating the four studies is above all represented by abduction and meta-synthesis. Data were collected through a questionnaire survey, with mainly open-ended questions, and altogether ten focus group meetings with newly qualified primary school teachers in 2007–2008.
The teachers (n=88 in the questionnaire, n=17 in the focus groups) had between one and three years of teaching experience. Qualitative content analysis and narrative analysis were used when analysing the data. What, then, is the overall picture of induction, or of the first years in the profession, when the results presented in the articles are scrutinised? Four dimensions in particular seem to permeate the studies and emerge when they are put together. The first dimension, the relational–emotional, captures the social nature of induction and teachers' work and the emotional character intimately intertwined with it. The second dimension, the tensional–mutable, illustrates the intense pace of induction, together with the diffuse and unclear character of a teacher's job. The third dimension, the instructive–developmental, depicts induction as a unique and intensive phase of learning, maturing and professional development. Finally, the fourth dimension, the reciprocal–professional, stresses the importance of reciprocity and collaboration in induction, both formally and informally. The four dimensions outlined, or the integration of results, describing induction from the experiences of new teachers, constitute part of a new synthesis, induction practice. This synthesis was generated by viewing the integrated results through the theoretical lens of practice architectures and the three spaces: semantic space, physical space-time and social space. In this way, a more comprehensive, refined and partially new architecture of teachers' induction practices is presented and discussed.
Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To get full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput and reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Network-on-Chip approaches may be a solution to the excessive power consumption of today's and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented.
Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots, with performance similar to the latest NoC architectures. The thesis concludes that carefully co-designed elements across the different network levels enable considerable power savings for many-core systems.
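The idea of congestion-aware adaptive routing mentioned above can be sketched as follows (a generic 2D-mesh illustration in Python; the thesis targets a hybridized 3D architecture, and the occupancy-based port selection here is a stand-in, not its actual algorithm):

```python
# Each router knows the buffer occupancy of its neighbours and adaptively
# picks the least-congested of the minimal (productive) output directions.

def route_hop(cur, dst, occupancy):
    """Return the next output port ('N'/'S'/'E'/'W') toward dst,
    or None when cur is already the destination."""
    x, y = cur
    dx, dy = dst[0] - x, dst[1] - y
    candidates = []
    if dx:
        candidates.append(('E', (x + 1, y)) if dx > 0 else ('W', (x - 1, y)))
    if dy:
        candidates.append(('N', (x, y + 1)) if dy > 0 else ('S', (x, y - 1)))
    if not candidates:
        return None
    # adaptive selection: prefer the neighbour with the lowest occupancy
    return min(candidates, key=lambda c: occupancy.get(c[1], 0))[0]

occ = {(1, 0): 7, (0, 1): 2}          # east neighbour congested, north free
print(route_hop((0, 0), (3, 3), occ))  # routes north, around the congestion
```

Restricting the choice to minimal directions keeps paths shortest; a deadlock-free implementation would additionally constrain the allowed turns.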
Abstract:
The rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as other components in the system. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to congestion and system control, for instance. Additionally, faults can cause problems in multiprocessor systems. These faults can be transient faults or permanent manufacturing faults, or they can appear due to aging. To solve the emerging traffic management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems.
The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
Abstract:
Abstract—Concept development and experimentation (CD&E) plays an important role in driving strategic transformation in the military community. Defence architecture frameworks, such as the NATO Architecture Framework, are considered excellent means of supporting CD&E. There is not much empirical evidence, however, to indicate how enterprise architectures (EA) are applied in the military community, or particularly in military CD&E. Consequently, this paper describes and discusses the empirical application of the EA approach in CD&E. The research method of the paper is a case study. Situational method engineering (SiME) is used as a framework for adapting the EA approach to the paper's case project. The findings suggest that EA is applicable to CD&E work, although not all aspects of the original concept could be expressed in the EA model of the case project. The results also show that the SiME method can support applying the EA framework to CD&E in the case project.
Abstract:
The political environment of security and defence has changed radically in the Western industrialised world since the Cold War. As a response to these changes, since the beginning of the twenty-first century, most Western countries have adopted a 'capabilities-based approach' to developing and operating their armed forces. More responsive and versatile military capabilities must be developed to meet the contemporary challenges. The systems approach is seen as a beneficial means of overcoming the traps of conventional thinking in resolving complex real-world issues. The main objectives of this dissertation are to explore and assess the means to enhance the development of military capabilities, both in concept development and experimentation (CD&E) and in national defence materiel collaboration issues. This research provides a unique perspective, a systems approach, on the development areas of concern in resolving complex real-world issues. The dissertation seeks to increase the understanding of the military capability concept both as a whole and within its life cycle. It follows the generic functionalist systems methodology by Jackson, which applies a comprehensive set of constitutive rules to examine the research objectives. This dissertation contributes to current studies of military capability. It presents two interdependent conceptual capability models: the comprehensive capability meta-model (CCMM) and the holistic capability life cycle model (HCLCM). These models holistically and systematically complement the existing, but still evolving, understanding of military capability and its life cycle. In addition, this dissertation contributes to the scientific discussion of defence procurement, in its broad meaning, by introducing a holistic model of national defence materiel collaboration between the defence forces, the defence industry and academia.
The model connects the key collaborative mechanisms, which currently work in isolation from each other, and takes into consideration the unique needs of each partner. This dissertation also contributes empirical evidence regarding the benefits of enterprise architectures (EA) for CD&E. The EA approach may add value to traditional concept development by increasing the clarity, consistency and completeness of the concept. The most important use considered for EA in CD&E is that it enables further utilisation of the concept created in the case project.
Abstract:
This doctoral dissertation investigates the adult education policy of the European Union (EU) in the framework of the Lisbon agenda 2000–2010, with a particular focus on the changes of policy orientation that occurred during this reference decade. The year 2006 can, in fact, be considered a turning point for EU policy-making in the adult learning sector: a radical shift from a wide-ranging and comprehensive conception of educating adults towards a vocationally oriented understanding of this field and policy area has been observed, in particular in the second half of the so-called 'Lisbon decade'. In this light, one of the principal objectives of the mainstream policy set by the Lisbon Strategy, that of fostering all forms of participation of adults in lifelong learning paths, appears to have shifted its political background and vision in a very short period of time, reflecting an underlying polarisation and progressive transformation of European policy orientations. Hence, by means of content analysis and process tracing, it is shown that the target of EU adult education policy has, in this framework, shifted from citizens to workers, and that the competence development model, borrowed from the corporate sector, has been established as the reference for the new policy road maps. This study draws on the theory of governance architectures and applies a post-ontological perspective to discuss whether the above trends are intrinsically due to the nature of the Lisbon Strategy, which encompasses education policies, and to what extent supranational actors and phenomena such as globalisation influence European governance and decision-making.
Moreover, it is shown that the way in which the EU is shaping the upgrading of the skills and competences of adult learners is modelled around the needs of the 'knowledge economy', thus according a great deal of importance to the 'new skills for new jobs' and perhaps not enough to life skills in the broader sense, which include, for example, social and civic competences: these are often promoted but rarely implemented in depth in EU policy documents. In this framework, it is shown how different EU policy areas are intertwined with and related to global phenomena, and it is emphasised how crucial a role the building of EU education systems should play in the formation of critical thinking, civic competences and skills for sustainable democratic citizenship, on which a truly cohesive and inclusive society fundamentally depends; a model of environmental and cosmopolitan adult education is proposed in order to address the challenges of the new millennium. In conclusion, an appraisal of the EU's public policy is outlined, along with some personal thoughts on how progress might be pursued and actualised.
Abstract:
This work presents a geometric nonlinear dynamic analysis of plates and shells using eight-node hexahedral isoparametric elements. The main features of the present formulation are: (a) the element matrices are obtained using reduced integration with hourglass control; (b) an explicit Taylor-Galerkin scheme is used to carry out the dynamic analysis, solving the corresponding equations of motion in terms of velocity components; (c) the Truesdell stress rate tensor is used; (d) the vector processor facilities of modern supercomputers are used. The results obtained are comparable with previous solutions in terms of accuracy and computational performance.
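The explicit time-stepping character of such a scheme can be illustrated with a minimal sketch (a generic one-degree-of-freedom explicit update in Python, not the paper's Taylor-Galerkin hexahedral formulation; the spring constant and step size below are arbitrary assumptions):

```python
# One explicit central-difference step: accelerations come from the
# residual of external minus internal forces, so no system matrix is
# ever assembled or factorized -- the trait that makes explicit dynamic
# schemes easy to vectorize on vector processors.

def explicit_step(u, v, a, dt, mass, f_ext, f_int):
    """Advance displacement u and velocity v by one time step."""
    v_half = v + 0.5 * dt * a            # half-step velocity
    u_new = u + dt * v_half              # displacement update
    a_new = (f_ext - f_int(u_new)) / mass
    v_new = v_half + 0.5 * dt * a_new    # complete the velocity update
    return u_new, v_new, a_new

# Linear spring f_int = k*u: the scheme reproduces harmonic motion.
k, m, dt = 4.0, 1.0, 0.01
u, v = 1.0, 0.0
a = -k * u / m
for _ in range(157):                     # ~half a period (omega = 2 rad/s)
    u, v, a = explicit_step(u, v, a, dt, m, 0.0, lambda x: k * x)
print(round(u, 2))                       # approximately -1.0
```

Explicit schemes like this are only conditionally stable, so the step size must stay below the critical value set by the highest element frequency.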