972 results for space exploration
Abstract:
This paper presents a general, global approach to the problem of robot exploration, utilizing a topological data structure to guide an underlying Simultaneous Localization and Mapping (SLAM) process. A Gap Navigation Tree (GNT) is used to motivate global target selection, and occluded regions of the environment (called “gaps”) are tracked probabilistically. The process of map construction and the motion of the vehicle alter both the shape and location of these regions. The use of online mapping is shown to reduce the difficulties in implementing the GNT.
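As a concrete illustration of the kind of topological structure involved, the sketch below shows one minimal way a Gap Navigation Tree could be represented. The class names, fields, and event handlers are assumptions made for this example, not the paper's implementation.

```python
# Minimal, illustrative Gap Navigation Tree sketch (not the paper's implementation).
# Class names, fields, and event handlers are assumptions made for this example.

class GapNode:
    """A node representing an occluded region ("gap") on the robot's visibility boundary."""
    def __init__(self, gap_id, parent=None):
        self.gap_id = gap_id
        self.parent = parent
        self.children = []      # gaps that split off from this one
        self.explored = False   # True once the region behind the gap is known to be mapped

class GapNavigationTree:
    def __init__(self):
        self.root = GapNode("root")   # the robot's current position
        self.leaves = {}              # gap_id -> GapNode, gaps currently visible

    def gap_appeared(self, gap_id):
        """A gap that appears while the robot moves hides only already-seen space."""
        node = GapNode(gap_id, parent=self.root)
        node.explored = True
        self.root.children.append(node)
        self.leaves[gap_id] = node

    def gap_split(self, gap_id, new_ids):
        """One gap splits into several; children inherit the explored/unexplored status."""
        parent = self.leaves.pop(gap_id)
        for nid in new_ids:
            child = GapNode(nid, parent=parent)
            child.explored = parent.explored
            parent.children.append(child)
            self.leaves[nid] = child

    def next_target(self):
        """Global target selection: chase any leaf gap that still hides unexplored space."""
        for node in self.leaves.values():
            if not node.explored:
                return node
        return None   # every gap hides known space: exploration is complete
```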
Abstract:
Design Proposal for the Blue Lunar Support Hub. The conceptual design of a space station is one of the most challenging tasks in aerospace engineering. The history of the space station Mir and the assembly of the International Space Station demonstrate that, even within the assembly phase, quick solutions have to be found to cope with budget and technical problems or changing objectives. This report is the outcome of the conceptual design work of the Space Station Design Workshop (SSDW) 2007, which took place as an international design project from the 16th to the 21st of July 2007 at the Australian Centre for Field Robotics (ACFR), University of Sydney, Australia. The participants were tasked to design a human-tended space station in low lunar orbit (LLO), focusing on supporting future missions to the Moon in a programmatic context of space exploration beyond low Earth orbit (LEO). The design incorporated elements ranging from systems engineering to interior architecture. The customised, intuitive, rapid-turnaround software tools enabled the team to successfully tackle the complex problem of conceptual design of crewed space systems. A strong emphasis was put on improving the integration of the human crew, as the crew is the major contributor to mission success, while always respecting the boundary conditions imposed by the challenging environment of space. This report documents the methodology, tools and outcomes of SSDW 2007. The design results produced by Team Blue are presented.
Abstract:
Today's SoCs are complex designs with multiple embedded processors, memory subsystems, and application-specific peripherals. The memory architecture of embedded SoCs strongly influences the power and performance of the entire system. Further, the memory subsystem constitutes a major part (typically up to 70%) of the silicon area in a current-day SoC. In this article, we address on-chip memory architecture exploration for DSP processors whose memory is organized as multiple banks, where banks can be single- or dual-ported with non-uniform bank sizes. We propose two different methods for physical memory architecture exploration and identify the strengths and applicability of these methods in a systematic way. Both methods address memory architecture exploration for a given target application by considering the application's data access characteristics, and generate a set of Pareto-optimal design points that are interesting from a power, performance and VLSI area perspective. To the best of our knowledge, this is the first comprehensive work on memory space exploration at the physical memory level that integrates data layout and memory exploration to address the system objectives from both a hardware design and an application software development perspective. Further, we propose an automatic framework that explores the design space, identifying hundreds of Pareto-optimal design points within a few hours of running on a standard desktop configuration.
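The Pareto filtering step that such an exploration relies on is straightforward to state in code. The sketch below is a generic, minimal version under assumed metrics (power, cycles, area); it is not the paper's algorithm or data.

```python
# Minimal sketch of Pareto filtering over candidate memory architectures.
# The metrics (power_mW, cycles, area_mm2) and candidate list are placeholders.

def dominates(a, b):
    """a dominates b if it is no worse in every metric and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only design points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Each candidate: (power_mW, cycles, area_mm2) for one bank configuration.
candidates = [(120.0, 9.5e6, 1.8), (95.0, 1.1e7, 1.6), (130.0, 9.4e6, 2.1), (95.0, 1.2e7, 1.9)]
print(pareto_front(candidates))   # the last point is dominated and dropped
```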
Abstract:
Past studies use deterministic models to evaluate optimal cache configurations or to explore the cache design space. However, with the increasing number of components present on a chip multiprocessor (CMP), deterministic approaches do not scale well. Hence, we apply probabilistic genetic algorithms (GAs) to determine a near-optimal cache configuration for a sixteen-tile CMP. We propose and implement a faster trace-based approach to estimate the fitness of a chromosome. It shows up to 218x simulation speedup over cycle-accurate architectural simulation. Our methodology can be applied to solve other cache optimization problems such as design space exploration of caches and their partitioning among applications/virtual machines.
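As a rough illustration of the approach, the sketch below outlines a genetic-algorithm loop over per-tile cache configurations. The chromosome encoding, selection scheme, and toy fitness function are assumptions for this example; the paper estimates fitness from memory-access traces rather than a cost heuristic.

```python
# Illustrative GA sketch for cache design-space exploration (assumed encoding and fitness).
import random

CACHE_SIZES = [16, 32, 64, 128]      # KB per tile (assumed options)
ASSOCIATIVITIES = [1, 2, 4, 8]
TILES = 16

def random_chromosome():
    # One (size, associativity) gene per tile of the CMP.
    return [(random.choice(CACHE_SIZES), random.choice(ASSOCIATIVITIES)) for _ in range(TILES)]

def fitness(chromosome):
    # Placeholder: a real implementation would replay an address trace per tile and
    # combine the estimated hit rate with area/energy cost.
    return -sum(size * assoc for size, assoc in chromosome)   # toy: prefer cheaper configs

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    population = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, TILES)                  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:               # mutate one random gene
                i = random.randrange(TILES)
                child[i] = (random.choice(CACHE_SIZES), random.choice(ASSOCIATIVITIES))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```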
Abstract:
The H.R. MacMillan Space Centre is a multi-faceted organization whose mission is to educate, inspire and evoke a sense of wonder about the universe, our planet and space exploration. As a popular Vancouver science centre, it faces the same range of challenges and issues as other major attractions: how does the Space Centre maintain a healthy public attendance in an increasingly competitive market, where visitors continue to be presented with an increasingly rich range of choices for their leisure spending and entertainment dollars? This front-end study investigated visitor attitudes, thoughts and preconceptions on the topic of space and astronomy. It also examined visitors’ motivations for coming to a space science centre. Useful insights were obtained which will be applied to improve future programme content and exhibit development.
Abstract:
We measured the elemental composition of a sample of the Allende meteorite with a miniature laser ablation mass spectrometer. This Laser Mass Spectrometer (LMS) has been designed and built at the University of Bern in the Department of Space Research and Planetary Sciences with the objective of using such an instrument on a space mission. Utilising the Allende meteorite as the test sample in this study, it is demonstrated that the instrument allows the in situ determination of the elemental composition, and thus the mineralogy and petrology, of untreated rocky samples, particularly on planetary surfaces. In total, 138 measurements of elemental compositions have been carried out on an Allende sample. The mass spectrometric data are evaluated and correlated with an optical image. It is demonstrated that, by presenting the measured elements in the form of mineralogical maps, LMS can serve as an element imaging instrument with a very high spatial resolution at the µm scale. The detailed analysis also includes a mineralogical evaluation and an investigation of the volatile element content of Allende. All findings are in good agreement with published data and underline the high sensitivity, accuracy and capability of LMS as a mass analyser for space exploration.
Abstract:
‘Everybody is science conscious these days’ – so started the inaugural week of Frontiers of Science, a self-described ‘intelligently presented and attractively drawn’ science-based comic strip published in the Sydney Morning Herald from 1961 to 1982 and ultimately syndicated to daily newspapers around the world. An archive of the first 200 Frontiers of Science comic strips (1961−65) has been made freely available online through an initiative of the University of Sydney Library. While the 1960s public interest in evolution, space exploration, and the Cold War has given way to twenty-first-century concerns about global warming, genetic engineering, and alternative energy sources, it is fair to say that everybody is still science conscious. Frontiers of Science provides an interesting and nostalgic insight into 1960s popular science through an unusual mode of dissemination.
Abstract:
Embedded many-core architectures contain dozens to hundreds of CPU cores that are connected via a highly scalable NoC interconnect. Our Multiprocessor-System-on-Chip CoreVAMPSoC combines the advantages of tightly coupled bus-based communication with the scalability of NoC approaches by adding a CPU cluster as an additional level of hierarchy. In this work, we analyze different cluster interconnect implementations with 8 to 32 CPUs and compare them, in terms of resource requirements and performance, to hierarchical NoC approaches. Using 28 nm FD-SOI technology, the area requirement for 32 CPUs and an AXI crossbar is 5.59 mm², including 23.61% for the interconnect, at a clock frequency of 830 MHz. In comparison, a hierarchical MPSoC with 4 CPU clusters and 8 CPUs in each cluster requires only 4.83 mm², including 11.61% for the interconnect. To evaluate the performance, we use a compiler for streaming applications to map programs to the different MPSoC configurations. We use this approach for a design-space exploration to find the most efficient architecture and partitioning for an application.
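For orientation, the reported percentages can be converted into absolute interconnect areas; the following back-of-the-envelope check is derived from the figures quoted above, not taken from the paper.

```python
# Interconnect area implied by the reported totals and percentages (simple derived check).
flat_total_mm2, flat_share = 5.59, 0.2361      # 32 CPUs on one AXI crossbar
hier_total_mm2, hier_share = 4.83, 0.1161      # 4 clusters of 8 CPUs each
print(f"flat crossbar interconnect: {flat_total_mm2 * flat_share:.2f} mm^2")   # ~1.32 mm^2
print(f"hierarchical interconnect:  {hier_total_mm2 * hier_share:.2f} mm^2")   # ~0.56 mm^2
```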
Abstract:
REDEFINE is a reconfigurable SoC architecture that provides a unique platform for high-performance and low-power computing by exploiting the synergistic interaction between a coarse-grain dynamic dataflow model of computation (to expose abundant parallelism in applications) and runtime composition of efficient compute structures (on the reconfigurable computation resources). We propose and study the throttling of execution in REDEFINE to maximize the architecture's efficiency. A feature-specific, fast hybrid (mixed-level) simulation framework for early design-phase studies is developed and implemented to make the huge design space exploration practical. We perform performance modeling in terms of the selection of important performance criteria and the ranking of the explored throttling schemes, and investigate the effectiveness of the design space exploration using statistical hypothesis testing. We find throttling schemes which simultaneously give an appreciable (24.8%) overall performance gain in the architecture and a 37% resource usage gain in the throttling unit.
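The hypothesis-testing step can be illustrated with a short sketch: comparing runtime samples of two throttling schemes with a t-test. The samples, the choice of test, and the significance threshold below are assumptions for the example, not the paper's setup.

```python
# Illustrative comparison of two throttling schemes with a hypothesis test.
# The runtime samples are invented for this example.
from scipy import stats

scheme_a_runtimes = [1.02, 0.98, 1.05, 1.01, 0.97, 1.03]   # normalised runtimes, scheme A
scheme_b_runtimes = [0.91, 0.94, 0.89, 0.93, 0.95, 0.90]   # normalised runtimes, scheme B

t_stat, p_value = stats.ttest_ind(scheme_a_runtimes, scheme_b_runtimes)
if p_value < 0.05:
    print(f"Scheme B differs significantly from scheme A (p = {p_value:.4f})")
else:
    print(f"No significant difference detected (p = {p_value:.4f})")
```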
Abstract:
Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.
This thesis consists of three complementary parts: synthesis, specification, and design. The first part focuses on the synthesis of centralized and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated using the specification language of linear temporal logic (LTL). The second part addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
The final part focuses on design space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with control logic to infer the state of the system.
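To give a flavour of the kind of requirement that ends up in LTL, the block below shows two illustrative formulas for an electric power system. The propositions and the properties themselves are invented for this example and are not the thesis's actual specification.

```latex
% Illustrative LTL requirements (invented for this example; \square, \lozenge need amssymb).
% g1 = generator 1 healthy, c1/c2 = contactors closed, b_ess = essential bus powered.
\begin{align*}
  &\square\,\bigl(\lnot g_{1} \rightarrow \lozenge\, b_{\mathrm{ess}}\bigr)
    && \text{after a generator fault, the essential bus is eventually repowered}\\
  &\square\,\lnot\,(c_{1} \wedge c_{2})
    && \text{the two contactors are never closed at the same time}
\end{align*}
```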
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. This implies that the characteristic dimensions of foams span multiple length scales, making it difficult to model their mechanical properties. Continuum mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by the non-linear plateau stress regime. However, they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which is capable of describing the phenomenon at a continuum level but with some physical insight, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is ‘how to extract the intrinsic material response from simple mechanical test data, such as the stress vs. strain response?’ A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, “Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes,” J. Mech. Phys. Solids, 59, pp. 2227–2237; Erratum 60, 1753–1756 (2012)], the property space exploration was extended to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in the experimental data, such as:
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved to be superior, since it is the most sensitive in capturing the material properties.
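One way to picture the hardening-softening-hardening material input function mentioned above is as a piecewise flow-strength curve over plastic strain. The sketch below is only illustrative: the functional form and every constant in it are assumptions, not the values used in the model.

```python
# Illustrative hardening-softening-hardening flow-strength curve for a foam-like solid;
# all constants here are made up for the example.
import numpy as np

def flow_strength(eps_p, s0=1.0, h1=8.0, eps1=0.05, s_min=0.7, eps2=0.45, h2=25.0):
    """Flow strength vs. equivalent plastic strain: initial hardening, softening to a
    plateau, then steep densification hardening."""
    eps_p = np.asarray(eps_p, dtype=float)
    s_peak = s0 + h1 * eps1
    harden = s0 + h1 * eps_p                                             # stage 1: hardening
    soften = s_peak + (s_min - s_peak) * (eps_p - eps1) / (eps2 - eps1)  # stage 2: softening
    densify = s_min + h2 * (eps_p - eps2) ** 2                           # stage 3: densification
    return np.where(eps_p < eps1, harden, np.where(eps_p < eps2, soften, densify))

strain = np.linspace(0.0, 0.8, 9)
print(np.round(flow_strength(strain), 3))
```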
Abstract:
218 p. -- Thesis awarded the "Doctor Europeus" mention, carried out in the period October 2005 to May 2010 in the "Materiales+Tecnologías" (GMT) group.
Abstract:
A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are the amount of design-space exploration iteration, the degree of process concurrency, and the timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. Results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to deriving a system dynamics model from a process task network. The new approach could be applied to analyze other concurrent engineering scenarios. © The Author(s) 2012.
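As a toy illustration of how such levers can interact, the sketch below runs a small Monte Carlo in which exploration iterations reduce the error rate but add up-front work, while concurrency shortens the critical path but amplifies rework risk. It is not the article's system dynamics model; every relationship and number in it is invented for the example.

```python
# Toy Monte Carlo of lever interactions on lead time (invented model, not the article's).
import random

def simulate_lead_time(exploration_iters, concurrency, review_interval, complexity=0.3, trials=2000):
    """Average lead time (arbitrary units) for one combination of the three levers."""
    total = 0.0
    for _ in range(trials):
        error_rate = complexity / (1 + exploration_iters)        # exploration reduces errors...
        time = 10.0 + 2.0 * exploration_iters                    # ...but adds up-front work
        task_time = 20.0 * (1.0 - 0.4 * concurrency)              # concurrency shortens the critical path
        reviews = max(1, int(task_time // review_interval))
        for _ in range(reviews):
            time += review_interval
            if random.random() < error_rate * (1 + concurrency):  # concurrency amplifies rework risk
                time += 5.0                                        # rework penalty
        total += time
    return total / trials

for iters in (0, 2, 4):
    for conc in (0.0, 0.5, 1.0):
        print(iters, conc, round(simulate_lead_time(iters, conc, review_interval=5.0), 1))
```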
Abstract:
Modern engineering design involves the deployment of many computational tools. Research on challenging real-world design problems is focused on developing improvements for the engineering design process through the integration and application of advanced computational search/optimization and analysis tools. Successful application of these methods generates vast quantities of data on potential optimum designs. To gain maximum value from the optimization process, designers need to visualise and interpret this information, leading to better understanding of the complex and multimodal relations between parameters, objectives and the decision-making of multiple and strongly conflicting criteria. Initial work by the authors has identified that the Parallel Coordinates interactive visualisation method has considerable potential in this regard. This methodology involves significant levels of user interaction, making the engineering designer central to the process rather than the passive recipient of a deluge of pre-formatted information. In the present work we have applied and demonstrated this methodology in two different aerodynamic turbomachinery design cases: a detailed 3D shape design for compressor blades, and a preliminary mean-line design for the whole compressor core. The first case comprises 26 design parameters for the parameterisation of the blade geometry, and we analysed the data produced from a three-objective optimization study, thus describing a design space with 29 dimensions. The latter case comprises 45 design parameters and two objective functions, hence developing a design space with 47 dimensions. In both cases the dimensionality can be managed quite easily in Parallel Coordinates space and, most importantly, we are able to identify interesting and crucial aspects of the relationships between the design parameters and the optimum level of the objective functions under consideration. These findings guide the human designer to find answers to questions that could not even be addressed before. In this way, understanding the design leads to more intelligent decision-making and design space exploration. © 2012 AIAA.
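A minimal version of the Parallel Coordinates view described above can be produced with standard plotting libraries. In the sketch below the column names and the random data are placeholders rather than the turbomachinery design variables.

```python
# Minimal parallel-coordinates sketch for browsing optimization results (placeholder data).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(0.0, 1.0, size=(60, 5)),
                  columns=["param_1", "param_2", "param_3", "efficiency", "pressure_ratio"])
# Tag each design by which third of the efficiency range it falls into, to colour the lines.
df["band"] = pd.cut(df["efficiency"], bins=3, labels=["low", "mid", "high"])

parallel_coordinates(df, class_column="band", colormap="viridis", alpha=0.5)
plt.title("Design-space exploration results (placeholder data)")
plt.show()
```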