988 results for Space exploration
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the expressive power of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the same tight constraints on size, power consumption, and price with embedded systems, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility, and relatively short time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity, and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written in SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hardware/software codesign and simulation, and an extendable library of automatically configured reusable hardware blocks. Other topics covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
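As an illustration of the modeling style this implies (a minimal sketch, not code from the thesis: the functional unit, port names, and bit widths are all assumptions), a reusable hardware block can be described in SystemC as an ordinary C++ module with ports and a sensitivity-driven process:

    #include <systemc.h>

    // Hypothetical TTA-style functional unit: adds its operand and trigger inputs.
    SC_MODULE(adder_fu) {
        sc_in<sc_uint<32> >  opnd;    // operand port
        sc_in<sc_uint<32> >  trig;    // in a TTA, a write here would trigger the operation
        sc_out<sc_uint<32> > result;

        void compute() { result.write(opnd.read() + trig.read()); }

        SC_CTOR(adder_fu) {
            SC_METHOD(compute);       // re-evaluated whenever an input changes
            sensitive << opnd << trig;
        }
    };

    int sc_main(int, char**) {
        sc_signal<sc_uint<32> > a, b, r;
        adder_fu fu("fu");
        fu.opnd(a); fu.trig(b); fu.result(r);
        a = 2; b = 3;
        sc_start(10, SC_NS);                  // let the method fire
        std::cout << r.read() << std::endl;   // prints 5
        return 0;
    }

Because the module is plain C++, the same description can be simulated directly and, within the synthesizable subset, compiled to VHDL.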
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field; digital filters are typically described with boxes and arrows in textbooks as well. Dataflow is also becoming more interesting in other domains, and in principle any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes run concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable models, with minimal scheduling overhead, to dynamic models where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run time, while most decisions are pre-calculated. The result is then an as-small-as-possible set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools, in the context of design space exploration, to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
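As a minimal illustration of the firing discipline described above (a toy sketch, not code from the thesis; the actor and token types are invented), a dataflow node can be modeled as an object that fires only when its input queues hold enough tokens:

    #include <cstdio>
    #include <queue>

    // Edges are FIFO queues; the firing rule checks token availability,
    // and firing consumes inputs and produces outputs.
    struct Adder {
        std::queue<int>& in1;
        std::queue<int>& in2;
        std::queue<int>& out;

        bool can_fire() const { return !in1.empty() && !in2.empty(); }

        void fire() {
            int a = in1.front(); in1.pop();   // consume one token per input
            int b = in2.front(); in2.pop();
            out.push(a + b);                  // produce one output token
        }
    };

    int main() {
        std::queue<int> a, b, c;
        a.push(1); a.push(2);                 // two tokens waiting on in1
        b.push(10);                           // only one token on in2
        Adder add{a, b, c};
        while (add.can_fire()) add.fire();    // fires exactly once
        std::printf("%d\n", c.front());       // prints 11
    }

A quasi-static scheduler would replace the run-time can_fire() test with a pre-computed sequence of fire() calls wherever the token rates can be determined statically.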
Abstract:
The spatial dimension is fundamental when addressing the topic of urban employment generation. Space plays a key role in the location decisions made by agents, since not all places offer the same levels of utility and profitability. The existence of advantages in certain places, such as agglomeration, economies of scale, variety, and accessibility, among others, may help explain why and where urban employment is generated. Using a theoretical model of preferences for variety, exploratory spatial data analysis (ESDA), and spatial econometrics, it is shown how such spatial advantages can affect employment generation, with Bogotá as a case study.
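A typical ESDA statistic in such a study (the abstract does not specify which measures were used, so this is only indicative) is global Moran's I, which quantifies the spatial autocorrelation of employment across zones:

    I = \frac{n}{\sum_i \sum_j w_{ij}} \cdot
        \frac{\sum_i \sum_j w_{ij}\,(x_i - \bar{x})(x_j - \bar{x})}{\sum_i (x_i - \bar{x})^2},

where x_i is employment in zone i, \bar{x} is its mean over the n zones, and w_{ij} is a spatial weight (e.g., 1 if zones i and j are contiguous and 0 otherwise). Values of I above its expectation indicate that high-employment zones cluster in space.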
Abstract:
Current and planned robotic rovers for space exploration are focused on science and correspondingly carry a science payload. Future missions will need robotic rovers that can demonstrate a wider range of functionality. This paper proposes an approach to offering this greater functionality by employing science and/or tool packs aboard a highly mobile robotic chassis. The packs are interchangeable and each contains different instruments or tools. The appropriate selection of science and/or tool packs enables the robot to perform a great variety of tasks either alone or in cooperation with other robots. The multi-tasking rover (MTR), thus conceived, provides a novel method for high return on investment. This paper describes the mobility system of the MTR and reports on initial experimental evaluation of the robotic chassis.
Abstract:
Electronic applications are currently developed under the reuse-based paradigm. This design methodology presents several advantages for reducing design complexity, but brings new challenges for the test of the final circuit. The access to embedded cores, the integration of several test methods, and the optimization of several cost factors are just a few of the problems that need to be tackled during test planning. Within this context, this thesis proposes two test planning approaches that aim at reducing the test costs of a core-based system by means of hardware reuse and integration of the test planning into the design flow. The first approach considers systems whose cores are connected directly or through a functional bus. The test planning method consists of a comprehensive model that includes the definition of a multi-mode access mechanism inside the chip and a search algorithm for the exploration of the design space. The access mechanism model considers the reuse of functional connections as well as partial test buses, core transparency, and other bypass modes. The test schedule is defined in conjunction with the access mechanism so that good trade-offs among the costs of pins, area, and test time can be sought. Furthermore, system power constraints are also considered. This expansion of concerns makes an efficient, yet fine-grained, search possible in the huge design space of a reuse-based environment. Experimental results clearly show the variety of trade-offs that can be explored using the proposed model, and its effectiveness in optimizing the system test plan. Networks-on-chip are likely to become the main communication platform of systems-on-chip. Thus, the second approach presented in this work proposes the reuse of the on-chip network for the test of the cores embedded in systems that use this communication platform. A power-aware test scheduling algorithm that aims at exploiting the network characteristics to minimize the system test time is presented. The reuse strategy is evaluated for a number of system configurations, such as different positions of the cores in the network, power consumption constraints, and number of interfaces with the tester. Experimental results show that the parallelization capability of the network can be exploited to reduce the system test time, whereas area and pin overhead are strongly minimized. In this manuscript, the main problems of the test of core-based systems are first identified and the current solutions are discussed. The problems tackled by this thesis are then listed and the test planning approaches are detailed. Both test planning techniques are validated on the recently released ITC'02 SoC Test Benchmarks, and further compared to other test planning methods from the literature. This comparison confirms the efficiency of the proposed methods.
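The flavor of power-constrained test scheduling can be conveyed by a toy session-based greedy heuristic (all core names and figures are invented; the thesis algorithms are considerably more elaborate): at each step, start the longest pending core test that still fits under the system power budget, and let cores tested in parallel share that budget.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Test { const char* core; int time; int power; };

    int main() {
        std::vector<Test> pending = {{"cpu", 50, 30}, {"dsp", 40, 25},
                                     {"mem", 30, 20}, {"io",  20, 10}};
        const int budget = 50;                       // system power constraint
        std::sort(pending.begin(), pending.end(),    // longest tests first
                  [](const Test& a, const Test& b) { return a.time > b.time; });
        int t = 0;
        while (!pending.empty()) {
            int used = 0, session = 0;
            for (auto it = pending.begin(); it != pending.end();) {
                if (used + it->power <= budget) {    // runs in parallel this session
                    std::printf("t=%3d: start %s\n", t, it->core);
                    used += it->power;
                    session = std::max(session, it->time);
                    it = pending.erase(it);
                } else {
                    ++it;
                }
            }
            t += session;                            // next session starts when all finish
        }
        std::printf("total test time: %d\n", t);
    }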
Abstract:
The focus of this thesis is the development and modeling of an interface architecture to be employed for interfacing analog signals in mixed-signal SoCs. We claim that the approach presented here achieves a wide frequency range and covers a large range of applications with constant performance, allied to digital configuration compatibility. Our primary assumptions are to use a fixed analog block and to place application configurability in the digital domain, which leads to a mixed-signal interface. The use of a fixed analog block avoids the performance loss common to configurable analog blocks. Configurability in the digital domain makes it possible to use all existing tools for high-level design, simulation, and synthesis to implement the target application, with very good performance prediction. The proposed approach uses frequency translation (mixing) of the input signal followed by its conversion to the ΣΔ domain, which makes possible the use of a fairly constant analog block and a uniform treatment of input signals from DC to high frequencies. The programmability is performed in the ΣΔ digital domain, where the performance required by the application specification can be closely achieved. Theoretical and simulation models of the interface performance are developed for design space exploration and for physical design support. Two prototypes are built and characterized to validate the proposed model and to implement some application examples. The usage of this interface as a multi-band parametric ADC and as a two-channel analog multiplier and adder is shown. The multi-channel analog interface architecture is also presented. The characterization measurements support the main advantages of the proposed approach.
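The signal path can be sketched numerically (a minimal first-order model with invented frequencies; real designs use higher-order modulators): the input is mixed with a local oscillator to translate its band, and the result is encoded as a 1-bit ΣΔ stream whose average tracks the mixed signal.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double PI = 3.141592653589793;
        const double fs = 1.0e6, f_in = 101.0e3, f_lo = 100.0e3; // sample, input, LO
        double integ = 0.0;                                // loop integrator state
        for (int n = 0; n < 64; ++n) {
            double t  = n / fs;
            double x  = std::sin(2 * PI * f_in * t);       // analog input
            double bb = x * std::cos(2 * PI * f_lo * t);   // mixing: band shifts to ~1 kHz
            int    y  = (integ >= 0.0) ? 1 : -1;           // 1-bit quantizer
            integ += bb - y;                               // feedback around the integrator
            std::printf("%d", (y + 1) / 2);                // the ΣΔ bitstream
        }
        std::printf("\n");
    }

All application-specific processing (filtering, decimation, channel selection) then happens on this bitstream in the digital ΣΔ domain, which is what keeps the analog block fixed while the interface stays configurable.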
Abstract:
This work was developed in the Potengi river estuary, in the northern coastal city of Natal, State of Rio Grande do Norte. The objective was to study the multitemporal spatial dynamics of that estuary, analyzing how this environment behaved during the years 1988, 1994, and 2006. This period was chosen because it was when the area underwent its most intensive spatial exploitation for economic activities, as well as its largest and most rapid expansion of urban area. The study was supported by remote sensing, which has proven to be an efficient tool for environmental analysis, through geoprocessing techniques. The analysis showed that a huge change occurred in the configuration of the Potengi river estuary during the study period (1988 to 2006). The figures showed that the vegetated area decreased by 65.22%, the deforested area increased by 70.44%, shrimp farming grew by 452.07%, and the urban area increased by 52.65% over the period described. These numbers show that the process of occupying space in that area requires attention, because it is an environment that makes a huge contribution to the ecological balance.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Swing-by techniques are extensively used in interplanetary missions to minimize fuel consumption and to raise the payloads of spaceships. The effectiveness of this type of maneuver has been proven since the beginning of space exploration. Following this premise, we have explored the existence of natural and direct links between low Earth orbits and the lunar sphere of influence, in order to obtain low-energy interplanetary trajectories through swing-bys with the Moon and the Earth. The existence of these links is related to a family of retrograde periodic orbits around the Lagrangian equilibrium point L1 predicted by the circular, planar, restricted three-body Earth-Moon-particle problem. The trajectories in these links are sensitive to small disturbances, which enables them to be conveniently diverted, thus reducing the cost of the swing-by maneuver. These maneuvers provide an energy gain sufficient for the trajectories to escape from the Earth-Moon system and stabilize in heliocentric orbits between the Earth and Venus or the Earth and Mars. On the other hand, still within the Earth's sphere of influence, and taking advantage of the sensitivity of the trajectories, it is possible to design further swing-bys with the Earth or the Moon. This gives the trajectories a larger reach, until they can attain the orbits of other planets such as Venus and Mars.
Broucke, R.A., Periodic Orbits in the Restricted Three-Body Problem with Earth-Moon Masses, JPL Technical Report 32-1168, 1968.
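For context (standard celestial mechanics, not specific to this abstract), the planar circular restricted three-body problem cited above is governed, in the rotating frame and nondimensional units, by

    \ddot{x} - 2\dot{y} = \frac{\partial \Omega}{\partial x}, \qquad
    \ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y}, \qquad
    \Omega(x, y) = \frac{x^2 + y^2}{2} + \frac{1 - \mu}{r_1} + \frac{\mu}{r_2},

where \mu is the Moon's fraction of the total mass and r_1, r_2 are the particle's distances to the Earth and the Moon. The equilibrium point L1 and the retrograde periodic orbit families around it arise as solutions of these equations, and the strong sensitivity of nearby trajectories is what makes the low-cost diversions described above possible.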
Abstract:
Virtual platforms are of paramount importance for design space exploration, and their usage in early software development and verification is crucial. In particular, enabling accurate and fast simulation is especially useful, but these features usually conflict and trade-offs have to be made. In this paper we describe how we integrated TLM communication mechanisms into a state-of-the-art, cycle-accurate MPSoC simulation platform. More specifically, we show how we adapted ArchC fast functional instruction set simulators to the MPARM platform in order to achieve both fast simulation speed and accuracy. Our implementation led to a much faster hybrid platform, reaching speedups of up to 2.9x, and of 2.1x on average, with negligible impact on power estimation accuracy (3.26% on average, with a standard deviation of 2.25%). © 2011 IEEE.
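The communication mechanism in question can be illustrated with a generic TLM-2.0-style blocking transport (a sketch only, not code from the paper; module names, addresses, and the 10 ns latency are invented):

    #include <cstring>
    #include <systemc>
    #include <tlm>
    #include <tlm_utils/simple_initiator_socket.h>
    #include <tlm_utils/simple_target_socket.h>

    // ISS side: one blocking call replaces many cycle-accurate bus events.
    struct Iss : sc_core::sc_module {
        tlm_utils::simple_initiator_socket<Iss> socket;
        SC_CTOR(Iss) : socket("socket") { SC_THREAD(run); }
        void run() {
            unsigned char buf[4] = {1, 2, 3, 4};
            tlm::tlm_generic_payload tr;
            sc_core::sc_time delay = sc_core::SC_ZERO_TIME;
            tr.set_command(tlm::TLM_WRITE_COMMAND);
            tr.set_address(0x100);
            tr.set_data_ptr(buf);
            tr.set_data_length(4);
            socket->b_transport(tr, delay);  // whole transaction in one call
            wait(delay);                     // annotated delay preserves timing estimates
        }
    };

    // Memory side: services the transaction in a single callback.
    struct Mem : sc_core::sc_module {
        tlm_utils::simple_target_socket<Mem> socket;
        unsigned char storage[1024];
        SC_CTOR(Mem) : socket("socket") {
            socket.register_b_transport(this, &Mem::b_transport);
        }
        void b_transport(tlm::tlm_generic_payload& tr, sc_core::sc_time& delay) {
            std::memcpy(&storage[tr.get_address()], tr.get_data_ptr(), tr.get_data_length());
            tr.set_response_status(tlm::TLM_OK_RESPONSE);
            delay += sc_core::sc_time(10, sc_core::SC_NS);  // modeled memory latency
        }
    };

    int sc_main(int, char**) {
        Iss iss("iss");
        Mem mem("mem");
        iss.socket.bind(mem.socket);
        sc_core::sc_start();
        return 0;
    }

Trading per-cycle signal activity for such function calls is what buys speedups of this kind while keeping the timing annotations needed for power estimation.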
Abstract:
2001 SN263 is a triple asteroid system. Although it was discovered in 2001, only in 2008 did astronomical observations carried out at the Arecibo Observatory reveal that it is actually a system of three bodies orbiting each other. The main central body is an irregular object with a diameter of about 2.8 km, while the other two are small objects less than 1 km across. The system has an orbital eccentricity of 0.47, with a perihelion of 1.04 AU and an aphelion of 1.99 AU, which means that it can be considered a Near Earth Object. This interesting system was chosen as the target of the Aster mission, the first Brazilian space exploration undertaking. A small spacecraft, with 150 kg of total mass and 30 kg of payload with 110 W available for the instruments, is scheduled to be launched in 2015; in 2018 it will approach and be put into orbit around the triple system. The spacecraft will use electric propulsion, and its payload will include an imaging camera, a laser rangefinder, an infrared spectrometer, a mass spectrometer, and experiments to be performed on its way to the asteroid. This mission represents a great challenge for the Brazilian space program. It is being structured to allow the full engagement of Brazilian universities and technological companies in all the necessary developments. In this paper, we present some aspects of this mission, including the transfer trajectories to be used and details of the bus and payload subsystems that are being developed and will be used. Copyright ©2010 by the International Astronautical Federation. All rights reserved.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Robots are needed to perform important field tasks such as hazardous material clean-up, nuclear site inspection, and space exploration. Unfortunately, their use is not widespread, due to their long development times and high costs. To make them practical, a modular design approach is proposed: prefabricated modules are rapidly assembled to give a low-cost system for a specific task. This paper describes the modular design problem for field robots and the application of a hierarchical selection process to solve this problem. A theoretical analysis and an example case study are presented. The theoretical analysis of the modular design problem revealed the large size of the search space and showed the advantages of approaching the design on various levels. The hierarchical selection process applies physical rules to reduce the search space to a computationally feasible size, and a genetic algorithm performs the final search in the greatly reduced space. This process is based on the observation that simple, physically based rules can eliminate large sections of the design space and thereby greatly simplify the search. The design process is applied to a duct inspection task, for which five candidate robots were developed. Two of these robots are evaluated using detailed physical simulation. It is shown that the more obvious solution is not able to complete the task, while the non-obvious asymmetric design developed by the process is successful.
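The final search stage can be sketched as follows (a toy genetic algorithm with an invented module catalogue and scoring, not the paper's actual rule set or fitness): each design is a vector of module choices, physically infeasible combinations score zero, and selection, crossover, and mutation evolve the rest.

    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    static std::mt19937 rng(42);

    bool feasible(const std::vector<int>& d) {
        return d[0] * 2 >= d[1];   // toy rule: actuator must be strong enough for the tool
    }

    double fitness(const std::vector<int>& d) {
        if (!feasible(d)) return 0.0;              // rules prune the space before search
        return 10.0 - 0.5 * d[0] + d[1];           // toy cost/benefit score
    }

    int main() {
        std::uniform_int_distribution<int> opt(0, 7), gene(0, 3), pick(0, 19);
        std::vector<std::vector<int>> pop(20, std::vector<int>(4));
        for (auto& d : pop) for (auto& g : d) g = opt(rng);
        for (int step = 0; step < 100; ++step) {
            std::vector<std::vector<int>> next;
            for (int i = 0; i < 20; ++i) {
                auto a = pop[pick(rng)], b = pop[pick(rng)];
                auto child = fitness(a) > fitness(b) ? a : b;             // tournament selection
                int cut = gene(rng);
                std::copy(b.begin() + cut, b.end(), child.begin() + cut); // one-point crossover
                if (opt(rng) == 0) child[gene(rng)] = opt(rng);           // occasional mutation
                next.push_back(child);
            }
            pop.swap(next);
        }
        auto best = *std::max_element(pop.begin(), pop.end(),
            [](const std::vector<int>& x, const std::vector<int>& y)
            { return fitness(x) < fitness(y); });
        std::printf("best design: %d %d %d %d\n", best[0], best[1], best[2], best[3]);
    }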
Abstract:
One of the problems in the analysis of nucleus-nucleus collisions is obtaining information on the value of the impact parameter b. This work consists of the application of pattern recognition techniques aimed at associating values of b with groups of events. To this end, a support vector machine (SVM) classifier is adopted to analyze multifragmentation reactions. This method allows backtracing the values of b through a particular multidimensional analysis. The SVM classification consists of two main phases. In the first, known as the training phase, the classifier learns to discriminate events generated by two different models, Classical Molecular Dynamics (CMD) and Heavy-Ion Phase-Space Exploration (HIPSE), for the reaction 58Ni + 48Ca at 25 AMeV. In the second, known as the test phase, what has been learned is checked against new events generated by the same models. These results have been compared to those obtained with other techniques for backtracing the impact parameter. Our tests show that, following this approach, central and peripheral collisions in the CMD events are always better classified than with the other backtracing techniques. We have finally performed the SVM classification on the experimental data measured by the NUCL-EX collaboration with the CHIMERA apparatus for the same reaction.
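For reference, the standard SVM decision rule underlying both phases is (the kernel actually used is not specified in the abstract)

    f(\mathbf{x}) = \operatorname{sign}\left( \sum_{i=1}^{N} \alpha_i\, y_i\, K(\mathbf{x}_i, \mathbf{x}) + b_0 \right),

where \mathbf{x} is the multidimensional feature vector of a new event, the \mathbf{x}_i are the support vectors retained from the training phase, y_i \in \{-1, +1\} their class labels (e.g., central versus peripheral collision), \alpha_i the learned weights, K the kernel function, and b_0 the bias term (written b_0 here to avoid confusion with the impact parameter b).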