Abstract:
The thesis studies the possibility of using an embedded controller in a crane application and defines the requirements for designing such a controller. Basic crane control architectures are considered and compared. Then the embedded controller product life cycle is described, considering issues such as microcontroller selection, software/hardware design, and application development tools. Finally, an available embedded controller is described and used to implement crane control.
Abstract:
Electronic services have rapidly become more widespread in recent years. Functional public-administration e-services are continually being developed for ever new purposes. At present, the supply of electronic services is characterised by fragmentation: each organisation offering services develops them independently, in its own way. Developing services in a customer-oriented manner requires designing service ensembles at the architectural level. Beyond developing individual applications, attention must be paid to how these applications work together as parts of a larger whole. The starting point of service-oriented architecture design is that every individual application is itself a service, producing its specified output in a form that other applications are able to understand. As a result of this work it was found that although several reference architectures have been developed for electronic government, the practical work of integrating services is still unfinished. Combining individual services into larger ensembles requires determined architectural design as well as national coordination of that design. Sharing information across organisational boundaries raises a set of questions that have not yet been answered. Data protection and privacy cannot be sacrificed when designing electronic government.
Abstract:
This article describes work done on the enterprise architecture of the National Digital Library. The National Digital Library is an initiative of the Finnish Ministry of Education and Culture. Its purpose is to promote the availability of the digital information resources of archives, libraries and museums, and to develop the long-term preservation of digital cultural heritage materials. Enterprise architectures are a tool for strategic management and planning. An enterprise architecture also functions as an aid at a more practical level. It shows, for example, what kind of changes and improvements may be made in one system without overlap or conflict with other systems.
Abstract:
As technology geometries have shrunk to the deep submicron regime, the communication delay and power consumption of global interconnections in high-performance Multi-Processor Systems-on-Chip (MPSoCs) are becoming a major bottleneck. The Network-on-Chip (NoC) architecture paradigm, based on a modular packet-switched mechanism, can address many of the on-chip communication issues, such as the performance limitations of long interconnects and the integration of a large number of Processing Elements (PEs) on a chip. The choice of routing protocol and NoC structure can have a significant impact on performance and power consumption in on-chip networks. In addition, building a high-performance, area- and energy-efficient on-chip network for multicore architectures requires a novel on-chip router allowing a larger network to be integrated on a single die with reduced power consumption. On top of that, network interfaces are employed to decouple computation resources from communication resources, to provide synchronization between them, and to achieve backward compatibility with existing IP cores. Three adaptive routing algorithms are presented as part of this thesis. The first routing protocol presented is a congestion-aware adaptive routing algorithm for 2D mesh NoCs which does not support multicast (one-to-many) traffic, while the other two protocols are adaptive routing models supporting both unicast (one-to-one) and multicast traffic. A streamlined on-chip router architecture is also presented for avoiding congested areas in 2D mesh NoCs by employing efficient input and output selection. The output selection utilizes an adaptive routing algorithm based on the congestion condition of neighboring routers, while the input selection allows packets to be serviced from each input port according to its congestion level.
Moreover, in order to increase memory parallelism and ensure compatibility with existing IP cores in network-based multiprocessor architectures, adaptive network interface architectures are presented that use multiple SDRAMs which can be accessed simultaneously. In addition, a smart memory controller is integrated in the adaptive network interface to improve memory utilization and reduce both memory and network latencies. Three-Dimensional Integrated Circuits (3D ICs) have been emerging as a viable candidate for achieving better performance and package density compared to traditional 2D ICs. In addition, combining the benefits of the 3D IC and NoC schemes provides a significant performance gain for 3D architectures. In recent years, inter-layer communication across multiple stacked layers (the vertical channel) has attracted a lot of interest. In this thesis, a novel adaptive pipeline bus structure is proposed for inter-layer communication to improve performance by reducing the delay and complexity of traditional bus arbitration. In addition, two mesh-based topologies for 3D architectures are introduced to mitigate the inter-layer footprint and power dissipation on each layer with a small performance penalty.
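As a rough illustration of the congestion-aware output selection described in the abstract above, the following sketch picks, among the minimal directions toward the destination, the neighbouring router reporting the least congestion. The congestion metric (free downstream buffer slots), the direction encoding and all names are illustrative assumptions, not the thesis's actual design.

```python
# Minimal sketch of congestion-aware adaptive routing on a 2D mesh NoC.
# Assumption: congestion is measured as free buffer slots reported by each
# neighbouring router; only minimal (shortest-path) directions are considered.

def minimal_directions(cur, dst):
    """Return the minimal output directions from router `cur` to router `dst`."""
    (cx, cy), (dx, dy) = cur, dst
    dirs = []
    if dx > cx: dirs.append("E")
    elif dx < cx: dirs.append("W")
    if dy > cy: dirs.append("N")
    elif dy < cy: dirs.append("S")
    return dirs

def select_output(cur, dst, free_slots):
    """Adaptive output selection: among minimal directions, choose the
    neighbour with the most free buffer slots (i.e. least congested)."""
    dirs = minimal_directions(cur, dst)
    if not dirs:
        return None  # packet has arrived at its destination
    return max(dirs, key=lambda d: free_slots[d])

# Example: destination lies north-east, but the east neighbour is congested,
# so the packet is deflected north while still following a minimal path.
free = {"E": 1, "W": 4, "N": 3, "S": 2}
print(select_output((1, 1), (3, 2), free))  # -> N
```

Because only minimal directions are candidates, this selection stays deadlock-considerations aside on shortest paths; a real design would additionally restrict turns (e.g. via a turn model) to guarantee deadlock freedom.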
Abstract:
Robotic grasping has been studied increasingly for a few decades. While progress has been made in this field, robotic hands are still nowhere near the capability of human hands. However, in the past few years, the increase in computational power and the availability of commercial tactile sensors have made it easier to develop techniques that exploit the feedback from the hand itself, the sense of touch. The focus of this thesis lies in the use of this sense. The work described in this thesis approaches robotic grasping from two different viewpoints: robotic systems and data-driven grasping. The robotic systems viewpoint describes a complete architecture for the act of grasping and, to a lesser extent, more general manipulation. Two central properties that the architecture was designed for are hardware independence and the use of sensors during grasping. These properties enable the use of multiple different robotic platforms within the architecture. Secondly, new data-driven methods are proposed that can be incorporated into the grasping process. The first of these methods is a novel way of learning grasp stability from the tactile and haptic feedback of the hand instead of analytically solving the stability from a set of known contacts between the hand and the object. By learning from the data directly, there is no need to know the properties of the hand, such as its kinematics, enabling the method to be utilized with complex hands. The second novel method, probabilistic grasping, combines the fields of tactile exploration and grasp planning. By employing well-known statistical methods and pre-existing knowledge of an object, object properties, such as pose, can be inferred with an associated uncertainty. This uncertainty is utilized by a grasp planning process which plans for stable grasps under the inferred uncertainty.
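The idea of learning grasp stability directly from tactile data, rather than from an analytic contact model, can be illustrated with any generic supervised classifier. The sketch below uses a toy nearest-centroid classifier and invented feature vectors purely for illustration; the actual learning method, features and data of the thesis are not reproduced here.

```python
# Sketch: learning grasp stability from tactile/haptic feedback instead of
# an analytic contact model. Each grasp is a feature vector (e.g. flattened
# tactile-array pressures); the label marks whether the grasp held. A
# nearest-centroid classifier stands in for the real learning method.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(samples):
    """samples: list of (feature_vector, stable: bool) -> centroid model."""
    stable = [f for f, y in samples if y]
    unstable = [f for f, y in samples if not y]
    return {"stable": centroid(stable), "unstable": centroid(unstable)}

def predict(model, features):
    """Classify a grasp as stable if it is closer to the stable centroid."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return dist2(features, model["stable"]) < dist2(features, model["unstable"])

# Toy data: high, even tactile pressure tends to indicate a stable grasp.
data = [([0.9, 0.8, 0.85], True), ([0.7, 0.9, 0.8], True),
        ([0.1, 0.0, 0.3], False), ([0.2, 0.1, 0.0], False)]
model = train(data)
print(predict(model, [0.8, 0.7, 0.9]))  # -> True
```

The point of the data-driven formulation is visible even in this toy: nothing in `train` or `predict` needs the hand's kinematics or the contact locations, only recorded sensor readings with outcome labels.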
Abstract:
Personalised ubiquitous services have proliferated rapidly due to technological advancements in sensing, ubiquitous and mobile computing. Evolving societal trends, business and the economic potential of Personal Information (PI) have overlapped the service niches. At the same time, the societal thirst for more personalised services has increased, and it is met by soliciting deeper and more privacy-invasive PI from customers. Consequently, traditional privacy challenges are reinforced and new risks unearthed that render classical safeguards ineffective. The absence of solutions for critiquing personalised ubiquitous services from a privacy perspective aggravates the situation. This thesis presents a solution permitting users' PI, stored in their mobile terminals, to be disclosed to services in a privacy-preserving manner for personalisation needs. The approach, termed Mobile Electronic Personality Version 2 (ME2.0), is compared to alternative mechanisms. Within ME2.0, the PI-handling vulnerabilities of ubiquitous services are identified, and services are sensitised to their practices and privacy implications. Vulnerabilities where PI may leak through covert solicits, excessive acquisition and legitimate data re-purposing to erode users' privacy are also considered. In this thesis, the design, components, internal structures, architectures, scenarios and evaluations of ME2.0 are detailed. The design addresses the implications and challenges raised by mobile terminals. The components and internal structures discussion covers how pieces of PI are stored and handled by terminals and services. The architecture focuses on the different components and their exchanges with services. Scenarios where ME2.0 is used are presented from different environment views, before ME2.0 is evaluated for performance, privacy and usability.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, a memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods, and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
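The device behaviour summarised above (resistance changed by the current driven through the device) is often introduced via the HP Labs linear ion-drift model, in which the memristance depends on an internal state variable. The following sketch integrates that model with explicit Euler steps; the parameter values and function names are illustrative assumptions, not measurements from any specific device or from the thesis.

```python
# Sketch of the linear ion-drift memristor model: memristance M depends on
# the internal state x = w/D (doped-region fraction), and a current through
# the device drifts x, changing the resistance. Parameters are illustrative.

R_ON, R_OFF = 100.0, 16e3   # fully doped / fully undoped resistance (ohm)
D = 10e-9                   # device thickness (m)
MU = 1e-14                  # dopant mobility (m^2 s^-1 V^-1)

def memristance(x):
    """Resistance as a function of the state variable x in [0, 1]."""
    return R_ON * x + R_OFF * (1.0 - x)

def step(x, current, dt):
    """Linear ion drift: dx/dt = (MU * R_ON / D^2) * i(t), x clipped to [0, 1]."""
    x += (MU * R_ON / D**2) * current * dt
    return min(max(x, 0.0), 1.0)

# A positive current pulse "writes" the device by lowering its resistance;
# the nonvolatile state x then retains the written value with no power applied.
x = 0.1
r_before = memristance(x)
for _ in range(1000):            # 1000 Euler steps of 1 us at 100 uA
    x = step(x, 100e-6, 1e-6)
r_after = memristance(x)
print(r_before > r_after)  # -> True: the write pulse reduced the memristance
```

Reading the state with a much smaller current perturbs `x` only negligibly, which is what makes the same device usable both as memory and, as the thesis argues, as a site of computation.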
Abstract:
The thesis consists of four studies (articles I–IV) and a comprehensive summary. The aim is to deepen understanding and knowledge of newly qualified teachers’ experiences of their induction practices. The research interest thus reflects the ambition to strengthen the research-based platform for support measures. The aim can be specified in the following four sub-areas: to scrutinise NQTs’ experiences of the profession in the transition from education to work (study I), to describe and analyse NQTs’ experiences of their first encounters with school and classroom (study II), to explore NQTs’ experiences of their relationships within the school community (study III), and to view NQTs’ experiences of support through peer-group mentoring as part of the wider aim of collaboration and assessment (study IV). The overall theoretical perspective is teachers’ professional development. Induction forms an essential part of this continuum and can primarily be seen as a socialisation process into the profession and the social working environment of schools, as a unique phase of teachers’ development contributing to certain experiences, and as a formal programme designed to support new teachers. These lines of research are initiated in the separate studies (I–IV) and deepened in the theoretical part of the comprehensive summary. In order to appropriately understand induction as a specific practice, the lines of research are in the end united and discussed with the help of practice theory. More precisely, the theory of practice architectures, comprising semantic space, physical space-time and social space, is used. The methodological approach to integrating the four studies is above all represented by abduction and meta-synthesis. Data were collected through a questionnaire survey, with mainly open-ended questions, and altogether ten focus group meetings with newly qualified primary school teachers in 2007–2008.
The teachers (n=88 in the questionnaire, n=17 in the focus groups) had between one and three years of teaching experience. Qualitative content analysis and narrative analysis were used when analysing the data. What, then, is the collective picture of induction, or of the first years in the profession, when scrutinising the results presented in the articles? Four dimensions especially seem to permeate the studies and emerge when they are put together. The first dimension, the relational–emotional, captures the social nature of induction and teachers’ work and the emotional character intimately intertwined with it. The second dimension, the tensional–mutable, illustrates the intense pace of induction, together with the diffuse and unclear character of a teacher’s job. The third dimension, the instructive–developmental, depicts induction as a unique and intensive phase of learning, maturity and professional development. Finally, the fourth dimension, the reciprocal–professional, stresses the importance of reciprocity and collaboration in induction, both formally and informally. The four dimensions outlined, or integration of results, describing induction from the experiences of new teachers, constitute part of a new synthesis, induction practice. This synthesis was generated by viewing the integrated results through the theoretical lens of practice architectures and the three spaces: semantic space, physical space-time and social space. In this way, a more comprehensive, refined and partially new architecture of teachers’ induction practices is presented and discussed.
Abstract:
Multiprocessing is a promising solution to meet the requirements of near-future applications. To get full benefit from parallel processing, a many-core system needs an efficient on-chip communication architecture. Network-on-Chip (NoC) is a general-purpose communication concept that offers high throughput, reduced power consumption, and keeps complexity in check through a regular composition of basic building blocks. This thesis presents power-efficient communication approaches for networked many-core systems. We address a range of issues important for designing power-efficient many-core systems at two different levels: the network level and the router level. From the network-level point of view, exploiting state-of-the-art concepts such as Globally Asynchronous Locally Synchronous (GALS), Voltage/Frequency Island (VFI), and 3D Networks-on-Chip approaches may be a solution to the excessive power consumption demanded by today’s and future many-core systems. To this end, a low-cost 3D NoC architecture, based on high-speed GALS-based vertical channels, is proposed to mitigate the high peak temperatures, power densities, and area footprints of vertical interconnects in 3D ICs. To further exploit the beneficial feature of a negligible inter-layer distance in 3D ICs, we propose a novel hybridization scheme for inter-layer communication. In addition, an efficient adaptive routing algorithm is presented which enables congestion-aware and reliable communication for the hybridized NoC architecture. An integrated monitoring and management platform on top of this architecture is also developed in order to implement more scalable power optimization techniques. From the router-level perspective, four design styles for implementing power-efficient reconfigurable interfaces in VFI-based NoC systems are proposed. To enhance the utilization of virtual channel buffers and to manage their power consumption, a partial virtual channel sharing method for NoC routers is devised and implemented.
Extensive experiments with synthetic and real benchmarks show significant power savings and mitigated hotspots with similar performance compared to the latest NoC architectures. The thesis concludes that carefully co-designed elements from different network levels enable considerable power savings for many-core systems.
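The partial virtual-channel (VC) sharing idea mentioned in the abstract above can be caricatured as follows: instead of statically binding every VC buffer to one input port, each port keeps a private VC and the rest form a shared pool claimed on demand. The class, the pool sizes and the allocation policy are illustrative assumptions, not the thesis's router design.

```python
# Sketch of partial virtual-channel sharing in a NoC router: each input
# port owns one private VC; remaining VC buffers sit in a shared pool that
# any port may claim, improving buffer utilization under uneven traffic.

class SharedVCAllocator:
    def __init__(self, ports, private_per_port=1, shared_pool=4):
        self.private = {p: private_per_port for p in ports}
        self.shared = shared_pool

    def allocate(self, port):
        """Grant a VC to `port`: private VCs first, then the shared pool."""
        if self.private[port] > 0:
            self.private[port] -= 1
            return "private"
        if self.shared > 0:
            self.shared -= 1
            return "shared"
        return None  # no VC free: the port must assert back-pressure

    def release(self, port, kind):
        """Return a VC to its pool once the packet has left the router."""
        if kind == "private":
            self.private[port] += 1
        else:
            self.shared += 1

alloc = SharedVCAllocator(["N", "S", "E", "W", "local"])
print(alloc.allocate("N"))  # -> private
print(alloc.allocate("N"))  # -> shared (N's private VC is exhausted)
```

The power-management angle is that an idle shared pool can be clock- or power-gated as one unit, whereas fully private buffers must be kept powered per port.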
Abstract:
Rapid ongoing evolution of multiprocessors will lead to systems with hundreds of processing cores integrated in a single chip. An emerging challenge is the implementation of reliable and efficient interconnection between these cores as well as the other components in such systems. Network-on-Chip is an interconnection approach intended to solve the performance bottleneck caused by traditional, poorly scalable communication structures such as buses. However, a large on-chip network involves issues related to congestion and system control, for instance. Additionally, faults can cause problems in multiprocessor systems: these can be transient faults, permanent manufacturing faults, or faults that appear due to aging. To solve the emerging traffic-management and controllability issues, and to maintain system operation regardless of faults, a monitoring system is needed. The monitoring system should be dynamically applicable to various purposes and it should fully cover the system under observation. In a large multiprocessor the distances between components can be relatively long. Therefore, the system should be designed so that the amount of energy-inefficient long-distance communication is minimized. This thesis presents a dynamically clustered distributed monitoring structure. The monitoring is distributed so that no centralized control is required for basic tasks such as traffic management and task mapping. To enable extensive analysis of different Network-on-Chip architectures, an in-house SystemC-based simulation environment was implemented. It allows transaction-level analysis without time-consuming circuit-level implementations during the early design phases of novel architectures and features. The presented analysis shows that the dynamically clustered monitoring structure can be efficiently utilized for traffic management in faulty and congested Network-on-Chip-based multiprocessor systems.
The monitoring structure can also be successfully applied for task mapping purposes. Furthermore, the analysis shows that the presented in-house simulation environment is a flexible and practical tool for extensive Network-on-Chip architecture analysis.
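A minimal sketch of the decentralised task mapping described above: the mesh is divided into clusters, and each cluster's local monitor maps an incoming task to the least-loaded non-faulty core it oversees, with no central controller. The cluster shape, the load metric and all names are illustrative assumptions, not the thesis's mechanism.

```python
# Sketch of dynamically clustered monitoring for task mapping: a local
# cluster monitor picks the least-loaded working core in its own cluster,
# so basic decisions never require centralized, long-distance communication.

def map_task(cluster_cores, load, faulty):
    """Return the least-loaded non-faulty core in the cluster, or None."""
    candidates = [c for c in cluster_cores if c not in faulty]
    if not candidates:
        return None  # the monitor would escalate to a neighbouring cluster
    return min(candidates, key=lambda c: load[c])

# A 2x2 cluster of cores addressed by mesh coordinates; core (0, 1) is
# lightly loaded but marked faulty, so the task goes to (1, 1) instead.
cluster = [(0, 0), (0, 1), (1, 0), (1, 1)]
load = {(0, 0): 3, (0, 1): 1, (1, 0): 2, (1, 1): 1}
print(map_task(cluster, load, faulty={(0, 1)}))  # -> (1, 1)
```

Keeping both the load table and the decision local to the cluster is what bounds the communication distance, matching the abstract's goal of minimizing energy-inefficient long-distance traffic.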
Abstract:
Abstract—Concept development and experimentation (CD&E) plays an important role in driving strategic transformation in the military community. Defence architecture frameworks, such as the NATO Architecture Framework, are considered excellent means to support CD&E. There is not much empirical evidence, however, to indicate how enterprise architectures (EA) are applied in the military community or particularly in military CD&E. Consequently, this paper describes and discusses an empirical application of the EA approach in CD&E. The research method of the paper is a case study. Situational method engineering (SiME) is used as a framework to adapt the EA approach to the paper’s case project. The findings suggest that EA is applicable to CD&E work, although not all aspects of the original concept could be expressed in the EA model of the case project. The results also show that the SiME method can support applying the EA framework to CD&E in the case project.
Abstract:
The political environment of security and defence has changed radically in the Western industrialised world since the Cold War. As a response to these changes, since the beginning of the twenty-first century most Western countries have adopted a ‘capabilities-based approach’ to developing and operating their armed forces. More responsive and versatile military capabilities must be developed to meet the contemporary challenges. The systems approach is seen as a beneficial means of overcoming the traps of conventional thinking in resolving complex real-world issues. The main objectives of this dissertation are to explore and assess the means to enhance the development of military capabilities both in concept development and experimentation (CD&E) and in national defence materiel collaboration issues. This research provides a unique perspective, a systems approach, to the development areas of concern in resolving complex real-world issues. The dissertation seeks to increase the understanding of the military capability concept both as a whole and within its life cycle. It follows the generic functionalist systems methodology by Jackson, which applies a comprehensive set of constitutive rules to examine the research objectives. This dissertation contributes to current studies of military capability. It presents two interdependent conceptual capability models: the comprehensive capability meta-model (CCMM) and the holistic capability life cycle model (HCLCM). These models holistically and systematically complement the existing, but still evolving, understanding of military capability and its life cycle. In addition, this dissertation contributes to the scientific discussion of defence procurement in its broad meaning by introducing a holistic model of national defence materiel collaboration between the defence forces, the defence industry and academia.
The model connects the key collaborative mechanisms, which currently work in isolation from each other, and takes into consideration the unique needs of each partner. This dissertation also contributes empirical evidence regarding the benefits of enterprise architectures (EA) to CD&E. The EA approach may add value to traditional concept development by increasing the clarity, consistency and completeness of the concept. The most important use considered for EA in CD&E is that it enables further utilisation of the concept created in the case project.
Abstract:
This doctoral dissertation investigates the adult education policy of the European Union (EU) in the framework of the Lisbon agenda 2000–2010, with a particular focus on the changes of policy orientation that occurred during this reference decade. The year 2006 can be considered, in fact, a turning point for EU policy-making in the adult learning sector: a radical shift from a wide-ranging and comprehensive conception of educating adults towards a vocationally oriented understanding of this field and policy area has been observed, in particular in the second half of the so-called ‘Lisbon decade’. In this light, one of the principal objectives of the mainstream policy set by the Lisbon Strategy, that of fostering all forms of participation of adults in lifelong learning paths, appears to have muted its political background and vision in a very short period of time, reflecting an underlying polarisation and progressive transformation of European policy orientations. Hence, by means of content analysis and process tracing, it is shown that the new target of the EU adult education policy, in this framework, has shifted from citizens to workers, and the competence development model, borrowed from the corporate sector, has been established as the reference for the new policy road maps. This study draws on the theory of governance architectures and applies a post-ontological perspective to discuss whether the above trends are intrinsically due to the nature of the Lisbon Strategy, which encompasses education policies, and to what extent supranational actors and phenomena such as globalisation influence European governance and decision-making.
Moreover, it is shown that the way in which the EU is shaping the upgrading of skills and competences of adult learners is modelled around the needs of the ‘knowledge economy’, thus according a great deal of importance to ‘new skills for new jobs’ and perhaps not enough to life skills in their broader sense, which include, for example, social and civic competences: these are often promoted but rarely implemented in depth in the EU policy documents. In this framework, it is conveyed how different EU policy areas are intertwined and interrelated with global phenomena, and it is emphasised how far the building of the EU’s education systems should play a crucial role in the formation of critical thinking, civic competences and skills for a sustainable democratic citizenship, on which a truly cohesive and inclusive society fundamentally depends; a model of environmental and cosmopolitan adult education is proposed in order to address the challenges of the new millennium. In conclusion, an appraisal of the EU’s public policy, along with some personal thoughts on how progress might be pursued and actualised, is outlined.
Abstract:
A parallel pseudo-spectral method for the simulation, on distributed-memory computers, of the shallow-water equations in primitive form was developed and used in the study of turbulent shallow-water LES models for orographic subgrid-scale perturbations. The main characteristics of the code are: momentum equations integrated in time using an accurate pseudo-spectral technique; Eulerian treatment of advective terms; and parallelization of the code based on a domain decomposition technique. The parallel pseudo-spectral code is efficient on various architectures: it gives high performance on vector computers and good speedup on distributed-memory systems. The code is being used for the study of the interaction mechanisms in shallow-water flows with regular as well as random orography with a prescribed spectrum of elevations. Simulations show the evolution of small-scale vortical motions from the interaction of the large-scale flow and the small-scale orographic perturbations. These interactions transfer energy from the large-scale motions to the small (usually unresolved) scales. The possibility of including the parametrization of these effects in turbulent LES subgrid-stress models for the shallow-water equations is addressed.
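The core of any pseudo-spectral technique like the one described above is computing spatial derivatives in Fourier space, where differentiation becomes multiplication by ik. The one-dimensional sketch below shows only that kernel; the actual solver, its grid, its time integrator and its domain decomposition are not reproduced here.

```python
# Sketch of the pseudo-spectral derivative kernel: for periodic data,
# d/dx is exact (to round-off) for all resolved Fourier modes, computed
# as ifft(i * k * fft(u)) with angular wavenumbers k.
import numpy as np

def spectral_derivative(u, length):
    """First derivative of periodic samples u on a domain of given length."""
    n = u.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# d/dx sin(x) = cos(x), recovered to near machine precision on 64 points --
# far beyond the O(h^2) accuracy of a finite-difference stencil.
x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
err = np.max(np.abs(spectral_derivative(np.sin(x), 2.0 * np.pi) - np.cos(x)))
print(err < 1e-10)  # -> True
```

This spectral accuracy is why the method is described as "accurate" for the momentum equations; the trade-off, relevant to the parallelization mentioned in the abstract, is that the FFT couples every grid point, making domain decomposition the natural strategy on distributed-memory machines.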