903 results for Co-Design


Relevance:

70.00%

Publisher:

Abstract:

Modern System-on-a-Chip (SoC) systems have grown rapidly in processing power while maintaining the size of the hardware circuit. The number of transistors on a chip continues to increase, but current SoC designs may not be able to exploit this potential performance, especially as energy consumption and chip area become two major concerns. Traditional SoC designs usually separate software and hardware, so improving system performance is a complicated task for both software and hardware designers. The aim of this research is to develop a hardware-acceleration workflow for software applications, so that system performance can be improved under constraints on energy consumption and on-chip resource costs. The characteristics of software applications can be identified with profiling tools, and hardware acceleration can yield significant performance improvements for mathematically intensive or frequently repeated functions. The performance of an SoC system can then be improved by applying hardware acceleration to the elements that incur performance overheads. The concepts in this study can be readily applied to a variety of sophisticated software applications. The contributions of SoC-based hardware acceleration in the hardware-software co-design platform are as follows. (1) Software profiling methods are applied to an H.264 Coder-Decoder (CODEC) core; the hotspot function of the target application is identified using critical attributes such as cycles per loop and loop rounds. (2) A hardware acceleration method based on a Field-Programmable Gate Array (FPGA) is used to resolve system bottlenecks and improve system performance; the identified hotspot function is converted to a hardware accelerator and mapped onto the hardware platform, and two types of hardware acceleration methods, central-bus design and co-processor design, are implemented for comparison in the proposed architecture.
(3) System specifications, such as performance, energy consumption, and resource costs, are measured and analyzed; the trade-offs among these three factors are compared and balanced, and different hardware accelerators are implemented and evaluated against system requirements. (4) A system verification platform is designed based on the Integrated Circuit (IC) workflow, with hardware optimization techniques used for higher performance and lower resource costs. Experimental results show that the proposed hardware acceleration workflow for software applications is an efficient technique: the system reaches a 2.8X performance improvement and saves 31.84% of energy consumption with the Bus-IP design, while the co-processor design achieves a 7.9X performance improvement and saves 75.85% of energy consumption.
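The hotspot-identification step in (1) can be illustrated with a small sketch: rank profiled functions by total cycle count (cycles per loop times loop rounds) and take the top entry as the candidate for FPGA offloading. The function names and figures below are hypothetical, not measurements from the H.264 CODEC study.

```python
# Hypothetical profile data: (function name, cycles per loop, loop rounds).
profile = [
    ("motion_estimation", 1200, 50000),
    ("dct_transform", 300, 20000),
    ("entropy_coding", 150, 8000),
]

def total_cycles(entry):
    # Total cost of a function = cycles per loop iteration * iterations.
    _, cycles_per_loop, loop_rounds = entry
    return cycles_per_loop * loop_rounds

# Rank functions by total cycle count; the top entry is the hotspot
# candidate to convert into a hardware accelerator.
ranked = sorted(profile, key=total_cycles, reverse=True)
hotspot = ranked[0][0]
print(hotspot)  # motion_estimation
```

In practice the same ranking would be fed by a profiling tool rather than hand-written numbers, but the selection criterion is the same.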

Relevance:

70.00%

Publisher:

Abstract:

Design is being performed across an ever-increasing spectrum of complex practices arising in response to emerging markets and technologies: co-design, digital interaction, service design, and cultures of innovation. This emerging notion of design has led to an expansive array of collaborative and facilitation skills to demonstrate and share how such methods can shape innovation. The meaning of these design things in practice cannot be taken for granted as matters of fact, which raises a key challenge for design: to represent its role through the contradictory nature of matters of concern. This paper explores an innovative, object-oriented approach within the field of design research, visually combining an actor-network theory framework with situational analysis, to report on the role of design for fledgling companies in Scotland established and funded through the knowledge exchange hub Design in Action (DiA). Key findings and visual maps are presented from reflective discussions with actors from a selection of the businesses within DiA's portfolio. The suggestion is that any notion of strategic value, of engendering meaningful change, or of sharing the vision of design through design things should be grounded in the reflexive interpretation of the matters of concern that emerge.

Relevance:

70.00%

Publisher:

Abstract:

Designing for users rather than with users, instead of taking them on board in the process, is still common practice in technology design and innovation. Design for inclusion aims to define and understand end-users, their needs, and their context of use, and by doing so to ensure that end-users are catered for and included while the results are geared towards universality of use. We describe the central role of end-user and designer participation, immersion, and perspective-taking in building user-driven solutions. These approaches provided a critical understanding of the counterpart's role. Designers could understand the users' needs, experience physical impairments, and see the interaction with the environment from another's perspective. Users could understand the challenges of designing for physical impairments, build a sense of ownership of the technology, and explore it from a creative perspective. This mutual understanding of the peer's role, needs, and perspective enhanced user participation and inclusion.

Relevance:

70.00%

Publisher:

Abstract:

As the semiconductor industry struggles to maintain its momentum along the path set by Moore's Law, three-dimensional integrated circuit (3D IC) technology has emerged as a promising solution for achieving higher integration density, better performance, and lower power consumption. However, despite its significant improvements in electrical performance, 3D IC presents several serious physical design challenges. In this dissertation, we investigate physical design methodologies for 3D ICs, with a primary focus on two areas: low-power 3D clock tree design, and reliability degradation modeling and management. Clock trees are essential parts of digital systems and dissipate a large amount of power due to their high capacitive loads. The majority of existing 3D clock tree designs focus on minimizing total wire length, which produces sub-optimal results for power optimization. In this dissertation, we formulate a 3D clock tree design flow that directly optimizes for clock power. We also investigate a design methodology for clock gating a 3D clock tree, which uses shutdown gates to selectively turn off unnecessary clock activity. Unlike in 2D ICs, where shutdown gates are commonly assumed to be cheap enough to apply at every clock node, shutdown gates in 3D ICs introduce additional control through-silicon vias (TSVs), which compete with clock TSVs for placement resources. We explore design methodologies that produce the optimal allocation and placement of clock and control TSVs so that clock power is minimized, and we show that the proposed synthesis flow saves significant clock power while accounting for the available TSV placement area. Vertical integration also brings new reliability challenges, including TSV electromigration (EM) and several other reliability loss mechanisms caused by TSV-induced stress. These reliability loss models involve complex inter-dependencies between electrical and thermal conditions, which have not been investigated in the past.
In this dissertation we set up an electrical/thermal/reliability co-simulation framework to capture the transient behavior of reliability loss in 3D ICs. We further derive and validate an analytical reliability objective function that can be integrated into the 3D placement design flow. The reliability-aware placement scheme enables co-design and co-optimization of both electrical and reliability properties, improving both the circuit's performance and its lifetime, and avoids unnecessary design cycles or ad-hoc fixes that lead to sub-optimal performance. Vertical integration also enables stacking DRAM on top of the CPU, providing high bandwidth and low latency. However, non-uniform voltage fluctuations and local thermal hotspots in the CPU layers couple into the DRAM layers, causing a non-uniform distribution of bit-cell leakage and, thereby, of bit flips. We propose a performance-power-resilience simulation framework to capture DRAM soft errors in 3D multi-core CPU systems. In addition, a dynamic resilience management (DRM) scheme is investigated, which adaptively tunes the CPU's operating points to adjust the DRAM's voltage noise and thermal conditions at runtime. The DRM uses dynamic frequency scaling to achieve a resilience borrow-in strategy, which effectively enhances DRAM resilience without sacrificing performance. The proposed physical design methodologies should act as important building blocks for 3D ICs and help push them toward mainstream acceptance in the near future.

Relevance:

70.00%

Publisher:

Abstract:

Integrated circuit scaling has enabled a huge growth in processing capability, which necessitates a corresponding increase in inter-chip communication bandwidth. As bandwidth requirements for chip-to-chip interconnection scale, the deficiencies of electrical channels become more apparent. Optical links present a viable alternative due to their low frequency-dependent loss and higher bandwidth density in the form of wavelength-division multiplexing. As integrated photonics and bonding technologies mature, commercialization of hybrid-integrated optical links is becoming a reality. Increasing silicon integration leads to better performance in optical links but necessitates a corresponding co-design strategy for the electronics and photonics. In this light, holistic design of high-speed optical links, with an in-depth understanding of both photonics and state-of-the-art electronics, brings their performance to unprecedented levels. This thesis presents developments in high-speed optical links achieved by co-designing and co-integrating the primary elements of an optical link: receiver, transmitter, and clocking.

In the first part of this thesis, a 3D-integrated CMOS/silicon-photonic receiver is presented. The electronic chip features a novel design that employs a low-bandwidth TIA front-end, double sampling, and equalization through dynamic offset modulation. Measured results show -14.9dBm sensitivity and 170fJ/b energy efficiency at 25Gb/s. The same receiver front-end is also used to implement a source-synchronous 4-channel WDM-based parallel optical receiver. Quadrature ILO-based clocking is employed for synchronization, together with a novel frequency-tracking method that exploits the dynamics of injection locking in a quadrature ring oscillator to increase the effective locking range. An adaptive body-biasing circuit is designed to keep the per-bit energy consumption constant across a wide range of data rates. The prototype measurements indicate a record-low power consumption of 153fJ/b at 32Gb/s, with a receiver sensitivity of -8.8dBm at 32Gb/s.

Next, on the optical transmitter side, three new techniques are presented. The first is a differential ring modulator that breaks the optical bandwidth/quality-factor trade-off known to limit the speed of high-Q ring modulators; this structure maintains a constant energy in the ring to avoid pattern-dependent power droop. As a first proof of concept, a prototype has been fabricated and measured at up to 10Gb/s. The second technique is thermal stabilization of micro-ring resonator modulators through direct measurement of temperature using a monolithic PTAT temperature sensor; the measured temperature is used in a feedback loop to adjust the ring's thermal tuner. A prototype is fabricated, and the closed-loop feedback system is demonstrated to operate at 20Gb/s in the presence of temperature fluctuations. The third technique is a switched-capacitor-based pre-emphasis technique designed to extend the inherently low bandwidth of carrier-injection micro-ring modulators. A measured prototype of the optical transmitter achieves an energy efficiency of 342fJ/bit at 10Gb/s, while the wavelength stabilization circuit based on the monolithic PTAT sensor consumes 0.29mW.

Lastly, a first-order frequency synthesizer suitable for high-speed on-chip clock generation is discussed. The proposed design features an architecture combining an LC quadrature VCO, two sample-and-hold circuits, a phase interpolator, digital coarse tuning, and rotational frequency detection for fine tuning. In addition to an electrical reference clock, the prototype chip can also receive a low-jitter optical reference clock generated by a high-repetition-rate mode-locked laser. The output clock at 8GHz has an integrated RMS jitter of 490fs, a peak-to-peak periodic jitter of 2.06ps, and a total RMS jitter of 680fs. The reference spurs are measured at 64.3dB below the carrier. At 8GHz, the system consumes 2.49mW from a 1V supply.

Relevance:

70.00%

Publisher:

Abstract:

Combinatorial optimization is a complex engineering subject. Although formulations differ with the nature of the problems in their setup, design, constraints, and implications, establishing a unifying framework is essential. This dissertation investigates the unique features of three important optimization problems that span from small-scale design automation to large-scale power system planning: (1) feeder remote terminal unit (FRTU) planning that considers the cybersecurity of the secondary distribution network in the electrical distribution grid, (2) physical-level synthesis for microfluidic lab-on-a-chip, and (3) discrete gate sizing in very-large-scale integration (VLSI) circuits. First, a cross-entropy optimization technique is proposed to handle FRTU deployment in the primary network while considering the cybersecurity of the secondary distribution network. Constrained by a monetary budget on the number of deployed FRTUs, the proposed algorithm identifies pivotal locations on a distribution feeder at which to install FRTUs over different time horizons. Then, multi-scale optimization techniques are proposed for physical-level synthesis of digital microfluidic lab-on-a-chip; the proposed techniques handle variation-aware lab-on-a-chip placement and routing co-design while satisfying all constraints and considering contamination and defects. Last, the first fully polynomial-time approximation scheme (FPTAS) is proposed for the delay-driven discrete gate sizing problem, providing a theoretical foundation where existing works are heuristics with no performance guarantee. The intellectual contribution of the proposed methods establishes a novel paradigm bridging gaps between professional communities.
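As a rough illustration of the cross-entropy idea behind the FRTU planning step, the sketch below samples candidate FRTU placements from Bernoulli probabilities, keeps an elite fraction of high-benefit placements, and nudges the probabilities toward those elites. The per-location benefit values, budget, and tuning parameters are all invented for illustration; the dissertation's actual objective encodes cybersecurity of the secondary network.

```python
import random

random.seed(0)

# Hypothetical per-location benefit of installing an FRTU on a feeder.
benefit = [5, 1, 8, 3, 9, 2, 7, 4]
budget = 3                      # monetary budget: at most 3 FRTUs
n = len(benefit)
p = [0.5] * n                   # Bernoulli sampling probabilities

def sample_placement():
    # Draw candidate locations, then enforce the budget by keeping
    # the highest-benefit draws.
    drawn = [i for i in range(n) if random.random() < p[i]]
    drawn.sort(key=lambda i: benefit[i], reverse=True)
    return drawn[:budget]

def score(placement):
    return sum(benefit[i] for i in placement)

for _ in range(30):             # cross-entropy iterations
    pool = sorted((sample_placement() for _ in range(50)),
                  key=score, reverse=True)
    elites = pool[:10]          # top 20% of the samples
    for i in range(n):
        freq = sum(i in e for e in elites) / len(elites)
        p[i] = 0.7 * p[i] + 0.3 * freq   # smoothed probability update

# Read off the learned placement: the budgeted locations with the
# highest sampling probabilities.
best = sorted(range(n), key=lambda i: p[i], reverse=True)[:budget]
print(sorted(best))
```

The smoothing factor keeps the probabilities from collapsing prematurely; in the actual algorithm, the scoring function and sample sizes would be problem-specific.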


Relevance:

60.00%

Publisher:

Abstract:

Discrete event-driven simulation of digital communication networks has been used widely. However, it is difficult to use a network simulator to simulate a hybrid system in which some objects are not discrete event-driven but continuous time-driven. A networked control system (NCS) is such an application, in which the physical process dynamics are continuous by nature. We have designed and implemented a hybrid simulation environment that effectively integrates models of continuous-time plant processes and discrete-event communication networks by extending the open-source network simulator NS-2. To do this, a synchronisation mechanism was developed to connect a continuous plant simulation with a discrete network simulation. Furthermore, for evaluating co-design approaches in an NCS environment, a piggybacking method was adopted to allow the control period to be adjusted during simulations. The effectiveness of the technique is demonstrated through case studies that simulate a networked control scenario in which the communication and control system properties are defined explicitly.
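A minimal sketch of such a synchronisation mechanism, assuming a toy first-order plant and a hand-made event queue standing in for the network simulator (the real environment extends NS-2, and the dynamics and event times here are illustrative only):

```python
import heapq

# Toy co-simulation: a first-order plant integrated with Euler steps,
# synchronised with a discrete event queue standing in for the network.
def run(t_end=1.0, dt=0.01):
    x, u = 1.0, 0.0                      # plant state and control input
    events = [(0.2, -0.5), (0.6, 0.3)]   # (arrival time, new control value)
    heapq.heapify(events)
    t, trace = 0.0, []
    while t < t_end:
        # Apply every network event that is due at or before this step.
        while events and events[0][0] <= t:
            _, u = heapq.heappop(events)
        x += dt * (-x + u)               # continuous plant: dx/dt = -x + u
        t = round(t + dt, 10)            # avoid floating-point drift
        trace.append((t, x))
    return trace

trace = run()
print(len(trace))  # 100 synchronised steps
```

The key point is the lockstep: the continuous plant never advances past a pending network event, which is exactly what the synchronisation mechanism enforces between the plant model and NS-2's event scheduler.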

Relevance:

60.00%

Publisher:

Abstract:

A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication, and control into a unified framework at different levels of machine/factory operation and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely, and lossless; because of random transmission delays and packet losses, the control performance of a system may deteriorate badly, and the system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed together. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks were not designed for distributed control applications, two aspects of NCS research need to be addressed to make these networks suitable for control systems in industrial environments. From the networking perspective, communication protocols need to be designed to satisfy NCS communication requirements such as real-time communication and high-precision clock consistency. From the control perspective, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern; the Markov chain model accurately captures the trade-off between real-time performance and throughput. Furthermore, a cross-layer optimisation scheme featuring application-layer flow-rate adaptation is designed to achieve a trade-off between real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmission from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
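The Markov chain modelling step can be illustrated with a deliberately tiny example: three medium-access states and a made-up row-stochastic transition matrix, with power iteration recovering the steady-state occupancy from which delay and throughput figures would be derived. The states and probabilities below are illustrative, not the model from the thesis.

```python
# States: 0 = idle, 1 = contending (backoff), 2 = transmitting.
# Transition probabilities are invented for illustration.
P = [
    [0.6, 0.4, 0.0],
    [0.0, 0.5, 0.5],
    [0.9, 0.1, 0.0],
]

def step(dist, P):
    # One application of the chain: new_j = sum_i dist_i * P[i][j].
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]          # start in the idle state
for _ in range(200):            # power iteration to the steady state
    dist = step(dist, P)

# Steady state is (3/7, 8/21, 4/21): the chain spends about 19% of
# its time transmitting, from which delay statistics would follow.
print([round(x, 3) for x in dist])
```

A real 802.11 model has far more states (backoff stages, retry counters), but the steady-state computation behind the QoS figures works the same way.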

Relevance:

60.00%

Publisher:

Abstract:

Networked control systems (NCSs) offer many advantages over conventional control; however, they also present challenging problems such as network-induced delays and packet losses. This paper proposes an approach of predictive compensation for simultaneous network-induced delays and packet losses. Unlike the majority of existing NCS control methods, the proposed approach addresses the co-design of both the network and the controller. It also relaxes the requirements for precise process models and a full understanding of NCS network dynamics. For a series of possible sensor-to-actuator delays, the controller computes a series of corresponding redundant control values and sends them in a single packet to the actuator. On receiving the control packet, the actuator measures the actual sensor-to-actuator delay and computes the control signal from the control packet. When a packet dropout occurs, the actuator utilizes past control packets to generate an appropriate control signal. The effectiveness of the approach is demonstrated through examples.
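The redundant-control-packet idea can be sketched as follows: the controller precomputes one control value per possible delay and ships them together, and the actuator indexes into the packet by the delay it measures, falling back to the most recent cached packet on a dropout. The control law, its gain, and the delay discounting are invented for illustration and are not the paper's actual controller.

```python
def make_control_packet(state, max_delay_steps=4):
    # Hypothetical predictive law: the longer the data is delayed,
    # the more the stale measurement is discounted.
    return [round(-0.8 * state * (0.9 ** d), 4) for d in range(max_delay_steps)]

class Actuator:
    def __init__(self):
        self.last_packet = None

    def actuate(self, packet, measured_delay):
        if packet is not None:           # packet arrived: cache and use it
            self.last_packet = packet
        if self.last_packet is None:
            return 0.0                   # nothing received yet
        d = min(measured_delay, len(self.last_packet) - 1)
        return self.last_packet[d]       # pick the value matching the delay

act = Actuator()
pkt = make_control_packet(state=2.0)
u1 = act.actuate(pkt, measured_delay=1)   # normal delivery, one-step delay
u2 = act.actuate(None, measured_delay=2)  # packet dropped: reuse cached packet
print(u1, u2)  # -1.44 -1.296
```

Note how the dropout case costs nothing extra on the network: the redundancy already shipped in the previous packet covers it.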

Relevance:

60.00%

Publisher:

Abstract:

Sense-making through conversation plays a key role in channelling and furthering participatory business model innovation. The designer as facilitator, with conversation as a core tool, is an emerging area of interest within the design research literature. This paper discusses preliminary findings of a case study of Second Road, a strategy and innovation consultancy that employed a design thinking approach and conversational methods to redesign a client's business development model. Through this study, conversation-based co-creation emerged as the primary method for participatory innovation.

Relevance:

60.00%

Publisher:

Abstract:

With approximately half of Australian university teaching now performed by sessional academics, there has been growing recognition of the contribution they make to student learning. At the same time, sector-wide research and institutional audits continue to raise concerns about academic development, quality assurance, recognition, and belonging. In response, universities have increasingly begun to offer academic development programs for sessional academics. However, such programs may be centrally delivered, generic in nature, and contained within the moment of delivery, while the faculty contexts and cultures that sessional academics work within are diverse, and the need for support unfolds in ad-hoc and often unpredictable ways. In this paper we present the Sessional Academic Success (SAS) program, a new framework that complements and extends the central academic development program for sessional academic staff at Queensland University of Technology. This program recognises that experienced sessional academics have much to contribute to the advancement of learning and teaching, and harnesses their expertise to provide school-based academic development opportunities, peer-to-peer support, and locally contextualised community building. We describe the program's implementation and explain how Sessional Academic Success Advisors (SASAs) are employed, trained, and supported to provide advice and mentorship and, through a co-design methodology, to develop local development opportunities and communities of teaching practice within their schools. Beyond the anticipated benefits to new sessional academics in terms of timely, contextual support and an improved sense of belonging, we explain how SAS provides a pathway for building leadership capacity and academic advancement for experienced sessional academics.
We take a collaborative, dialogic, and reflective-practice approach to this paper, interlacing insights from the Associate Director, Academic: Sessional Development, who designed the program, and two Sessional Academic Success Advisors who have piloted it within their schools.

Relevance:

60.00%

Publisher:

Abstract:

Rove n Rave™ is a website designed and created for, and with, people with an intellectual disability. Its aim is to provide them with a user-friendly online platform where they can share opinions and experiences, and where they can find reviews that help them choose a place to visit themselves. During the development process, input on design requirements was gathered from a group of people with an intellectual disability and the disability service provider. This group then tested the product and provided further feedback on improving the website. It was found that the choice of wording, icons, pictures, colours, and certain functions significantly affected the users' ability to understand the content of the website. This demonstrates that a partnership between the developer and the user is essential when designing and delivering products or services for people with an intellectual disability.

Relevance:

60.00%

Publisher:

Abstract:

The future of civic engagement is characterised by both technological innovation and new technological user practices, fuelled by trends towards mobile personal devices, broadband connectivity, open data, urban interfaces, and cloud computing. These technology trends are progressing at a rapid pace and have led global technology vendors to package and sell the 'Smart City' as a centralised service delivery platform predicted to optimise and enhance cities' key performance indicators, and to generate a profitable market. The top-down deployment of these large, proprietary technology platforms has helped sectors such as energy, transport, and healthcare to increase efficiencies. However, an increasing number of scholars and commentators warn of another 'IT bubble' emerging. Along with some city leaders, they argue that the top-down approach does not fit the governance dynamics and values of a liberal democracy when applied across sectors. A thorough understanding is required of the socio-cultural nuances of how people work, live, and play across different environments, and of how they employ social media and mobile devices to interact with, engage in, and constitute public realms. Although the term 'slacktivism' is sometimes used to denote a watered-down version of civic engagement and activism, reduced to clicking a 'Like' button and signing online petitions, we believe that we are far from witnessing another Biedermeier period in which people focus on the domestic and the non-political. There is plenty of evidence to the contrary, such as the post-election violence in Kenya in 2008, the Occupy movements in New York, Hong Kong, and elsewhere, the Arab Spring, Stuttgart 21, Fukushima, Taksim Gezi Park in Istanbul, and the Vinegar Movement in Brazil in 2013. These examples of civic action shape the dynamics of governments and, in turn, call for new processes to be incorporated into governance structures.
Participatory research into these new processes across the triad of people, place, and technology is a significant and timely investment to foster productive, sustainable, and livable human habitats. With this chapter, we want to reframe the current debates in academia and the priorities of industry and government to allow citizens and civic actors to take their rightful place at the centre of civic movements. This calls for new participatory approaches to co-inquiry and co-design. It is an evolving process with an explicit agenda to facilitate change, and we propose participatory action research (PAR) as an indispensable component in the journey to develop new governance infrastructures and practices for civic engagement. This chapter proposes participatory action research as a useful and fitting research paradigm to guide methodological considerations surrounding the study, design, development, and evaluation of civic technologies. We do not limit our definition of civic technologies to tools specifically designed simply to enhance government and governance, such as renewing a car registration online or casting a vote electronically on election day. Rather, we are interested in civic media and technologies that foster citizen engagement in the widest sense, and particularly in the participatory design of civic technologies that strive to involve citizens in political debate and action and to question conventional approaches to political issues (DiSalvo, 2012; Dourish, 2010; Foth et al., 2013). Following an outline of some underlying principles and assumptions behind participatory action research, especially as it applies to cities, we critically review case studies to illustrate the application of this approach, with a view to engendering robust, inclusive, and dynamic societies built on the principles of engaged liberal democracy. The rationale for this approach is to offer an alternative to smart cities in a 'perpetual tomorrow' (cf. e.g. Dourish & Bell, 2011), based on the many weak and strong signals of civic action revolving around technology seen today. It seeks to emphasise and direct attention to active citizenry over passive consumerism, human actors over human factors, culture over infrastructure, and prosperity over efficiency. First, we look at some fundamental issues that arise from applying simplistic smart city visions to the kind of problem a city is (cf. Jacobs, 1961). We focus on the touch points between "the city" and its civic body, the citizens. In order to provide for meaningful civic engagement, the city must provide appropriate interfaces.