889 results for Design Principles
Abstract:
Widespread approaches to fabricate surfaces with robust micro- and nanostructured topographies have been stimulated by opportunities to enhance interface performance by combining physical and chemical effects. In particular, arrays of asymmetric surface features, such as arrays of grooves, inclined pillars, and helical protrusions, have been shown to impart unique anisotropy in properties including wetting, adhesion, thermal and/or electrical conductivity, optical activity, and capability to direct cell growth. These properties are of wide interest for applications including energy conversion, microelectronics, chemical and biological sensing, and bioengineering. However, fabrication of asymmetric surface features often pushes the limits of traditional etching and deposition techniques, making it challenging to produce the desired surfaces in a scalable and cost-effective manner. We review and classify approaches to fabricate arrays of asymmetric 2D and 3D surface features, in polymers, metals, and ceramics. Analytical and empirical relationships among geometries, materials, and surface properties are discussed, especially in the context of the applications mentioned above. Further, opportunities for new fabrication methods that combine lithography with principles of self-assembly are identified, aiming to establish design principles for fabrication of arbitrary 3D surface textures over large areas. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
This paper explores a design strategy for hopping robots that makes use of the free vibration of an elastic curved beam. In this strategy, the leg structure consists of a specifically shaped elastic curved beam and a small rotating mass that induces free vibration of the entire robot body. Although we expect to improve the energy efficiency of locomotion by exploiting the mechanical dynamics, it is not trivial to take advantage of the coupled dynamics between actuation and mechanical structures for the purpose of locomotion. From this perspective, this paper explains the basic design principles through modeling, simulation, and experiments on a minimalistic hopping robot platform. More specifically, we show how to design elastic curved beams for stable hopping locomotion and how to control the robot using unconventional actuation. In addition, we analyze the proposed design strategy in terms of energy efficiency and discuss how it can be applied to other forms of legged robot locomotion. © 1996-2012 IEEE.
Abstract:
Locomotion has been one of the most frequently used case studies in hands-on robotics curricula. Students are usually instructed to construct their own wheeled or legged robots from modular robot kits. During development, students tend to emphasize the programming and, consequently, neglect the design of the robot's body. However, the morphology of a robot (i.e., its body shape and material properties) plays an important role, especially in dynamic tasks such as locomotion. In this paper we introduce a case study of a tutorial on soft robotics in which students were encouraged to focus solely on the morphology of a robot to achieve stable and fast locomotion. The students were meant to experience the influence material properties exert on the performance of a robot and, consequently, to extract design principles. This tutorial was held in the context of the 2012 Summer School on Soft Robotics at ETH Zurich, one of the world's first courses specializing in this emerging field. We describe the tutorial set-up, the hardware and software used, the student assessment criteria, and the results. Based on the high creativity and diversity of the robots built by the students, we conclude that the concept of this tutorial has great potential for both education and research. © 2013 IEEE.
Abstract:
There has been an increasing interest in applying biological principles to the design and control of robots. Unlike industrial robots that are programmed to execute a rather limited number of tasks, the new generation of bio-inspired robots is expected to display a wide range of behaviours in unpredictable environments, as well as to interact safely and smoothly with human co-workers. In this article, we put forward some of the properties that will characterize these new robots: soft materials, flexible and stretchable sensors, modular and efficient actuators, self-organization and distributed control. We introduce a number of design principles; in particular, we try to comprehend the novel design space that now includes soft materials and requires a completely different way of thinking about control. We also introduce a recent case study of developing a complex humanoid robot, discuss the lessons learned and speculate about future challenges and perspectives.
Abstract:
This study presents a novel approach to the design of low-cost and energy-efficient hopping robots that makes use of the free vibration of an elastic curved beam. We found that a hopping robot can benefit from an elastic curved beam in many ways, such as low manufacturing cost, light body weight, and small energy dissipation in mechanical interactions. A challenging problem of this design strategy, however, lies in harnessing the mechanical dynamics of free vibration in the elastic curved beam: because the free vibration is the outcome of coupled mechanical dynamics between actuation and mechanical structures, it is not trivial to systematically design mechanical structures and control architectures for stable locomotion. From this perspective, this paper investigates a case study of a simple hopping robot to identify the design principles of mechanics and control. We developed a hopping robot consisting of an elastic curved beam and a small rotating mass, which was then modeled and analyzed in simulation. The experimental results show that the robot is capable of exhibiting stable hopping gait patterns using small actuation with no sensory feedback, owing to the intrinsic stability of the coupled mechanical dynamics. Furthermore, an additional analysis shows that, by exploiting the free vibration of the elastic curved beam, the cost of transport of the proposed hopping locomotion can be in the same range as animal locomotion, including human running. © 2011 IEEE.
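The cost-of-transport comparison mentioned above uses the standard dimensionless metric of locomotion efficiency: energy expended per unit weight per unit distance. A minimal sketch (the function name and the sample numbers are illustrative, not values from the paper):

```python
def cost_of_transport(energy_j, mass_kg, distance_m, g=9.81):
    """Dimensionless cost of transport: energy per unit weight per unit distance."""
    return energy_j / (mass_kg * g * distance_m)

# Illustrative values only (not measurements from the paper):
# a 1 kg hopper consuming 20 J to travel 5 m.
cot = cost_of_transport(20.0, 1.0, 5.0)
print(round(cot, 3))  # ≈ 0.408
```

Because the metric is dimensionless, it allows the direct comparison between robots and animals (including human running) that the abstract describes.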
Abstract:
Competition dialysis was used to study the interactions of 13 substituted aromatic diamidine compounds with 13 nucleic acid structures and sequences. The results show a striking selectivity of these compounds for the triplex structure poly dA:(poly dT)(2), a novel aspect of their interaction with nucleic acids not previously described. The triplex selectivity of selected compounds was confirmed by thermal denaturation studies. Triplex selectivity was found to be modulated by the location of amidine substituents on the core phenyl-furan-phenyl ring scaffold. Molecular models were constructed to rationalize the triplex selectivity of DB359, the most selective compound in the series. Its triplex selectivity was found to arise from optimal ring stacking on base triplets, along with proper positioning of its amidine substituents to occupy the minor and the major-minor grooves of the triplex. New insights into the molecular recognition of nucleic acid structures emerged from these studies, adding to the list of available design principles for selectively targeting DNA and RNA.
Abstract:
This paper describes the design principles and implementation of the window generation system in the Tianma (天马) expert system development environment, including a rich and practical set of window-manipulation functions and an interactive window design environment, which makes it convenient for users to design the human-machine interfaces of their own development systems.
Abstract:
China has experienced rapid highway development since the 1990s. By the end of 2004, the total length of its highways reached 33 thousand kilometers, ranking 2nd in the world. Once a highway opens, the accumulation of time and traffic degrades its capability. To ensure good quality, safety, and operational function, reasonable measures must be taken to maintain it periodically. At present, a major problem is that traditional maintenance measures can no longer meet the increasing requirements. Owing to the characteristics of highways, the relationship between maintenance data and geographic position is closer than in most other domains. To improve the quality and efficiency of maintenance work, particularly when decision-making is needed, a great amount of data related to geographic position is absolutely necessary. Evidently, a Geographical Information System (GIS) has incomparable advantages in handling such spatial information. As a result, a GIS-based management system for highway maintenance has become inevitable for the development of highway maintenance. The purpose of this paper is to establish a management system for highway maintenance work based on GIS, the Global Positioning System (GPS), and a spatial database, to manage the various problems encountered in the work, and to provide support in information and methods. The study mainly includes: (1) Analysis of the current status of maintenance and management work; an overview of the history of domestic and international highway maintenance management systems; and identification of the necessity and importance of establishing a GIS-based management system for highway maintenance work. (2) Based on the requirement analysis, a general design for this management system is proposed, covering its objectives, design principles, framework, system structure, and function design.
(3) Outdoor data collection is not only a primary way to understand the current condition of the road, but also an important method for updating data after the system is put into use. This paper establishes a plan, based on GIS and GPS technologies, to collect data efficiently and precisely. (4) The maintenance management database is a supporting platform for various maintenance decisions. Such decisions need the support of a great amount of data, which raises problems such as the diversity of data sources and differences in data formats. This paper discusses how to deal with these problems and establish such a database. (5) An approach is proposed to assess pavement condition based on GIS and related maintenance models. Among all the maintenance models, the two for assessing and forecasting pavement condition are the most important and mature; this paper analyzes these two models and introduces them in terms of model integration. (6) Finally, the paper takes the Guangshen Highway as an example, explaining how to realize a GIS for the management of highway maintenance work.
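Point (5) concerns models for assessing and forecasting pavement condition. As a purely illustrative sketch of how an assessment model and a forecasting model fit together (the deduct-value scoring and exponential decay below are hypothetical stand-ins, not the paper's actual models):

```python
import math

def assess_pci(deduct_values):
    """Toy pavement-condition assessment: start from a perfect score of 100
    and subtract distress deductions, clamped to the range [0, 100].
    Illustrative only; not the models analyzed in the paper."""
    return max(0.0, min(100.0, 100.0 - sum(deduct_values)))

def forecast_pci(current_pci, years, decay_rate=0.05):
    """Toy deterioration forecast: exponential decay of the condition index."""
    return current_pci * math.exp(-decay_rate * years)

# Illustrative use: assess today's condition, then forecast five years out.
today = assess_pci([10.0, 15.0])       # 75.0
in_five = forecast_pci(today, 5)
```

In a GIS-based system of the kind described, each such score would be attached to a road segment's geographic record so that assessment and forecast results can be mapped and queried spatially.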
Abstract:
The exploding demand for services like the World Wide Web reflects the potential presented by globally distributed information systems. The number of WWW servers worldwide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large-scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem at four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability.
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representation can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims to reduce communication latency, traffic congestion, server workload, etc. We develop customizable middleware services that exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploiting the self-similarity of Web traffic), derive document access patterns via multi-server cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: To achieve any guarantee of quality of service or performance, one must work at the network layer, which can provide the basic guarantees of bandwidth, latency, and reliability. The third area is therefore a set of new techniques in network service and protocol design. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate.
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
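The pattern described in point (4) — abstract class interfaces for cooperating components, with each concretization tailored to a specific approach and environment — can be sketched as follows. All class and method names here are hypothetical, chosen only to illustrate the object-oriented framework idea, not taken from the project:

```python
from abc import ABC, abstractmethod

class ResourceManager(ABC):
    """Abstract interface for a family of resource-management models.
    Concrete subclasses realize different tradeoffs (names illustrative)."""

    @abstractmethod
    def register(self, resource_id: str, capacity: float) -> None:
        """Advertise a resource and its available capacity."""

    @abstractmethod
    def allocate(self, task_id: str, demand: float) -> bool:
        """Try to place a task; return True on success."""

class BestEffortManager(ResourceManager):
    """One concrete model: grant requests greedily while capacity remains."""

    def __init__(self):
        self._capacity = {}

    def register(self, resource_id, capacity):
        self._capacity[resource_id] = capacity

    def allocate(self, task_id, demand):
        for rid, cap in self._capacity.items():
            if cap >= demand:
                self._capacity[rid] = cap - demand
                return True
        return False
```

Other concretizations (e.g., a model that accounts for job priority or fault tolerance) would implement the same abstract interface, which is what lets the framework swap models per setting without changing its callers.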
Abstract:
Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunately, developing a deep understanding of these issues has proven to be a challenging task, since it in turn involves solving difficult problems such as mapping the actual topology, characterizing it, and developing models that capture its emergent behavior. Consequently, even though there are a number of topology models, it is an open question as to how representative the topologies they generate are of the actual Internet. Our goal is to produce a topology generation framework which improves the state of the art and is based on design principles which include representativeness, inclusiveness, and interoperability. Representativeness leads to synthetic topologies that accurately reflect many aspects of the actual Internet topology (e.g. hierarchical structure, degree distribution, etc.). Inclusiveness combines the strengths of as many generation models as possible in a single generation tool. Interoperability provides interfaces to widely-used simulation and visualization applications such as ns and SSF. We call such a tool a universal topology generator. In this paper we discuss the design, implementation and usage of the BRITE universal topology generation tool that we have built. We also describe the BRITE Analysis Engine, BRIANA, which is an independent piece of software designed and built upon BRITE design goals of flexibility and extensibility. The purpose of BRIANA is to act as a repository of analysis routines along with a user-friendly interface that allows its use on different topology formats.
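One classic generation model that topology generators of this kind commonly include is the Waxman random graph, in which edge probability decays with the distance between nodes. A self-contained sketch (parameter values are illustrative defaults, not BRITE's):

```python
import math
import random

def waxman_topology(n, alpha=0.15, beta=0.2, size=100.0, seed=1):
    """Sketch of the Waxman random-graph model: scatter n nodes in a
    size x size plane and add edge (u, v) with probability
    alpha * exp(-d(u, v) / (beta * L)), where L is the maximum
    possible inter-node distance (the plane's diagonal)."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, size), rng.uniform(0, size)) for _ in range(n)]
    L = size * math.sqrt(2)
    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            d = math.dist(pts[u], pts[v])
            if rng.random() < alpha * math.exp(-d / (beta * L)):
                edges.append((u, v))
    return edges
```

A "universal" generator in the abstract's sense would expose several such models (distance-based, degree-based, hierarchical) behind one interface and export the result in formats that simulators like ns can consume.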
Abstract:
We discuss the design principles of TCP within the context of heterogeneous wired/wireless networks and mobile networking. We identify three shortcomings in TCP's behavior: (i) the protocol's error detection mechanism, which does not distinguish different types of errors and thus does not suffice for heterogeneous wired/wireless environments, (ii) the error recovery, which is not responsive to the distinctive characteristics of wireless networks such as transient or burst errors due to handoffs and fading channels, and (iii) the protocol strategy, which does not control the tradeoff between performance measures such as goodput and energy consumption, and often entails a wasteful effort of retransmission and energy expenditure. We discuss a solution framework based on selected research proposals and the associated evaluation criteria for the suggested modifications. We highlight an important angle that has not attracted the required attention so far: the need for new performance metrics, appropriate for evaluating the impact of protocol strategies on battery-powered devices.
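The battery-oriented performance metrics the abstract calls for can be as simple as relating the data usefully delivered to the time taken and to the total energy spent, retransmissions included. A hypothetical sketch (metric definitions are illustrative, not taken from the paper):

```python
def goodput(useful_bytes, duration_s):
    """Goodput: application-useful bytes delivered per second,
    excluding retransmitted duplicates and protocol overhead."""
    return useful_bytes / duration_s

def energy_per_useful_byte(tx_energy_j, useful_bytes):
    """A battery-oriented metric: total transmission energy (including
    energy wasted on retransmissions) divided by the bytes that actually
    reached the application."""
    return tx_energy_j / useful_bytes
```

Two protocol strategies with identical goodput can then be distinguished by their energy per useful byte, capturing exactly the goodput/energy tradeoff the abstract says TCP does not control.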
Abstract:
The pervasiveness of personal computing platforms offers an unprecedented opportunity to deploy large-scale services that are distributed over wide physical spaces. Two major challenges face the deployment of such services: the often resource-limited nature of these platforms, and the necessity of preserving the autonomy of the owner of these devices. These challenges preclude using centralized control and preclude considering services that are subject to performance guarantees. To that end, this thesis advances a number of new distributed resource management techniques that are shown to be effective in such settings, focusing on two application domains: distributed Field Monitoring Applications (FMAs), and Message Delivery Applications (MDAs). In the context of FMA, this thesis presents two techniques that are well-suited to the fairly limited storage and power resources of autonomously mobile sensor nodes. The first technique relies on amorphous placement of sensory data through the use of novel storage management and sample diffusion techniques. The second approach relies on an information-theoretic framework to optimize local resource management decisions. Both approaches are proactive in that they aim to provide nodes with a view of the monitored field that reflects the characteristics of queries over that field, enabling them to handle more queries locally, and thus reduce communication overheads. Then, this thesis recognizes node mobility as a resource to be leveraged, and in that respect proposes novel mobility coordination techniques for FMAs and MDAs. Assuming that node mobility is governed by a spatio-temporal schedule featuring some slack, this thesis presents novel algorithms of various computational complexities to orchestrate the use of this slack to improve the performance of supported applications. 
The findings in this thesis, which are supported by analysis and extensive simulations, highlight the importance of two general design principles for distributed systems. First, a priori knowledge (e.g., about the target phenomena of FMAs and/or the workload of either FMAs or MDAs) can be used effectively for local resource management. Second, judicious leverage and coordination of node mobility can lead to significant performance gains for distributed applications deployed over resource-impoverished infrastructures.
Abstract:
CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
Abstract:
In a constantly changing world, humans are adapted to alternate routinely between attending to familiar objects and testing hypotheses about novel ones. We can rapidly learn to recognize and name novel objects without unselectively disrupting our memories of familiar ones. We can notice fine details that differentiate nearly identical objects and generalize across broad classes of dissimilar objects. This chapter describes a class of self-organizing neural network architectures--called ARTMAP--that are capable of fast, yet stable, on-line recognition learning, hypothesis testing, and naming in response to an arbitrary stream of input patterns (Carpenter, Grossberg, Markuzon, Reynolds, and Rosen, 1992; Carpenter, Grossberg, and Reynolds, 1991). The intrinsic stability of ARTMAP allows the system to learn incrementally for an unlimited period of time. The system's stability properties can be traced to the structure of its learned memories, which encode clusters of attended features into its recognition categories, rather than slow averages of category inputs. The level of detail in the learned attentional focus is determined moment by moment, depending on predictive success: an error due to over-generalization automatically focuses attention on additional input details, enough of which are learned in a new recognition category so that the predictive error will not be repeated. An ARTMAP system creates an evolving map from a variable number of learned categories that compress one feature space (e.g., visual features) to learned categories of another feature space (e.g., auditory features). Input vectors can be either binary or analog. Computational properties of the networks enable them to perform significantly better in benchmark studies than alternative machine learning, genetic algorithm, or neural network models. Some of the critical problems that challenge and constrain any such autonomous learning system are illustrated next.
Design principles that work together to solve these problems are then outlined. These principles are realized in the ARTMAP architecture, which is specified as an algorithm. Finally, ARTMAP dynamics are illustrated by means of a series of benchmark simulations.
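As one concrete illustration of ART-style category choice, vigilance testing, and fast learning, here is a simplified fuzzy ART step after Carpenter et al.; it is a sketch of a single module only, omitting the second module and map field that full ARTMAP adds for supervised prediction:

```python
import numpy as np

def fuzzy_art_step(I, weights, rho=0.75, alpha=0.001, beta=1.0):
    """One simplified fuzzy ART category-choice/learning step (a sketch,
    not the full ARTMAP system). I: input vector (typically
    complement-coded); weights: mutable list of category weight vectors.
    Returns the index of the chosen (or newly created) category."""
    # Choice function T_j = |I ^ w_j| / (alpha + |w_j|): rank categories
    # by how well they match the input (^ is the fuzzy AND, i.e. min).
    order = sorted(range(len(weights)),
                   key=lambda j: -np.minimum(I, weights[j]).sum()
                                  / (alpha + weights[j].sum()))
    for j in order:
        # Vigilance test: |I ^ w_j| / |I| >= rho, the attentional
        # criterion that controls generalization vs. detail.
        match = np.minimum(I, weights[j]).sum() / I.sum()
        if match >= rho:
            # Fast learning (beta = 1): weight moves to I ^ w_j.
            weights[j] = beta * np.minimum(I, weights[j]) \
                         + (1 - beta) * weights[j]
            return j
    # No existing category passes vigilance: recruit a new one.
    weights.append(I.copy())
    return len(weights) - 1
```

Raising `rho` forces finer categories (more attention to detail); in ARTMAP proper, a predictive error triggers exactly such a vigilance increase, which is the self-focusing mechanism the text describes.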
Abstract:
Process guidance supports users in increasing their process model understanding, process execution effectiveness and efficiency, and process compliance performance. This paper presents research in progress encompassing our ongoing design science research (DSR) project on Process Guidance Systems and a field evaluation of the resulting artifact in cooperation with a company. Building on three theory-grounded design principles, a Process Guidance System artifact for the company's IT service ticketing process is developed, deployed, and used. Following a multi-method approach, we plan to evaluate the artifact in a longitudinal field study, thereby gathering not only self-reported but also real usage data. This article describes the development of the artifact and discusses an innovative evaluation approach.