901 results for Many-core systems
Abstract:
Implicit dynamic-algebraic equations, known in control theory as descriptor systems, arise naturally in many applications. Such systems may not be regular (often referred to as singular). In that case the equations may not have unique solutions for consistent initial conditions and arbitrary inputs, and the system may not be controllable or observable. Many control systems can be regularized by proportional and/or derivative feedback. We present an overview of mathematical theory and numerical techniques for regularizing descriptor systems using feedback controls. The aim is to provide stable numerical techniques for analyzing and constructing regular control and state estimation systems and for ensuring that these systems are robust. State and output feedback designs for regularizing linear time-invariant systems are described, including methods for disturbance decoupling and mixed output problems. Extensions of these techniques to time-varying linear and nonlinear systems are discussed in the final section.
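To make the regularization idea concrete (a minimal sketch with invented toy matrices, not the numerical techniques of the paper): for a descriptor system E x' = A x + B u, proportional feedback u = F x replaces A by A + B F, and the closed-loop system is regular precisely when det(λE − (A + BF)) is not identically zero. The snippet tests that condition at random values of λ.

```python
import numpy as np

def pencil_is_regular(E, A, n_trials=8, tol=1e-10, seed=0):
    """Heuristic regularity test for the pencil lambda*E - A: the pencil
    is regular iff det(lambda*E - A) is not identically zero, so a
    nonzero determinant at a random lambda is strong evidence."""
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        lam = rng.standard_normal() + 1j * rng.standard_normal()
        if abs(np.linalg.det(lam * E - A)) > tol:
            return True
    return False

# Toy singular descriptor system (values invented for illustration).
E = np.array([[1.0, 0.0], [0.0, 0.0]])
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

print(pencil_is_regular(E, A))            # False: open loop is not regular
F = np.array([[1.0, 0.0]])                # proportional feedback u = F x
print(pencil_is_regular(E, A + B @ F))    # True: feedback regularizes it
```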
Abstract:
This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated desk-side computers are an appealing alternative to other high performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high performance computing. Essentially bringing “supercomputing to the masses”, this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made: Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups versus an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires frequent solution of complex systems with millions of unknowns, a task that this solution can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
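The thesis' GPU-specific storage scheme is not spelled out in the abstract; as background, the standard compressed ("packed") layout such schemes build on stores only the n(n+1)/2 entries of a triangular matrix, with closed-form index arithmetic that a GPU kernel can inline. A minimal sketch (the layout shown is the textbook one, not the thesis' novel scheme):

```python
import numpy as np

def pack_lower(A):
    """Pack the lower triangle of an n-by-n matrix into a flat array
    (row-major), keeping only the n*(n+1)//2 stored entries."""
    n = A.shape[0]
    return np.concatenate([A[i, :i + 1] for i in range(n)])

def packed_index(i, j):
    """Flat index of entry (i, j), i >= j: row i starts after
    0 + 1 + ... + i = i*(i+1)//2 earlier entries."""
    return i * (i + 1) // 2 + j

# Round-trip check on a random lower-triangular matrix.
n = 5
L = np.tril(np.random.default_rng(1).standard_normal((n, n)))
p = pack_lower(L)
assert all(p[packed_index(i, j)] == L[i, j]
           for i in range(n) for j in range(i + 1))
print(f"stored {p.size} of {n * n} entries")  # 15 of 25: memory halved
```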
Abstract:
Passive states of quantum systems are states from which no system energy can be extracted by any cyclic (unitary) process. Gibbs states of all temperatures are passive. Strong local (SL) passive states are defined to allow any general quantum operation, but the operation is required to be local, being applied only to a specific subsystem. Any mixture of eigenstates in a system-dependent neighborhood of a nondegenerate entangled ground state is found to be SL passive. In particular, Gibbs states are SL passive with respect to a subsystem only at or below a critical system-dependent temperature. SL passivity is associated in many-body systems with the presence of ground state entanglement in a way suggestive of collective quantum phenomena such as quantum phase transitions, superconductivity, and the quantum Hall effect. The presence of SL passivity is detailed for some simple spin systems where it is found that SL passivity is neither confined to systems of only a few particles nor limited to the near vicinity of the ground state.
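As a concrete illustration (background, not from the paper): ordinary passivity can be checked numerically via the Pusz-Woronowicz/Lenard criterion, by which a state is passive iff it commutes with the Hamiltonian and its populations do not increase with energy. A minimal sketch for a toy qubit, with all numerical values invented:

```python
import numpy as np

def is_passive(rho, H, tol=1e-9):
    """Pusz-Woronowicz/Lenard criterion: a state is passive iff it
    commutes with H and its populations are non-increasing in energy;
    then no cyclic unitary process can extract work."""
    if np.linalg.norm(rho @ H - H @ rho) > tol:
        return False
    E, V = np.linalg.eigh(H)                       # energies in ascending order
    pops = np.real(np.diag(V.conj().T @ rho @ V))  # populations per energy level
    return all(pops[i] + tol >= pops[i + 1] for i in range(len(pops) - 1))

H = np.diag([0.0, 1.0])                       # toy qubit Hamiltonian (assumed)
beta = 1.0
gibbs = np.diag(np.exp(-beta * np.diag(H)))
gibbs /= np.trace(gibbs)
print(is_passive(gibbs, H))                   # True: Gibbs states are passive
print(is_passive(np.diag([0.2, 0.8]), H))     # False: population inversion
```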
Abstract:
Range estimation is the core of many positioning systems such as radar and Wireless Local Positioning Systems (WLPS). Range is estimated by estimating Time-of-Arrival (TOA), the signal propagation delay between a transmitter and a receiver. Thus, error in TOA estimation degrades range estimation performance. In wireless environments, noise, multipath, and limited bandwidth reduce TOA estimation performance. TOA estimation algorithms designed for wireless environments aim to improve performance by mitigating the effect of closely spaced paths in practical (positive) signal-to-noise ratio (SNR) regions. Limited bandwidth prevents the discrimination of closely spaced paths, which reduces TOA estimation performance. TOA estimation methods are evaluated as a function of SNR, bandwidth, and the number of reflections in multipath wireless environments, as well as their complexity. In this research, a TOA estimation technique based on Blind signal Separation (BSS) is proposed. This frequency-domain method estimates TOA in wireless multipath environments for a given signal bandwidth. The structure of the proposed technique is presented and its complexity and performance are theoretically evaluated. It is shown that the proposed method is not sensitive to SNR, the number of reflections, or bandwidth. In general, as bandwidth increases, TOA estimation performance improves. However, spectrum is the most valuable resource in wireless systems, and a portion of spectrum large enough to support high performance TOA estimation is usually not available. In addition, the radio frequency (RF) components of wideband systems suffer from high cost and complexity. Thus, a novel multiband positioning structure is proposed. The proposed technique uses the available (non-contiguous) bands to support high performance TOA estimation. This system incorporates the capabilities of cognitive radio (CR) systems to sense the available spectrum (also called white spaces) and to exploit those white spaces for high-performance localization. First, contiguous bands divided into several non-equal, narrow sub-bands with the same SNR are concatenated to attain an accuracy corresponding to the equivalent full band. Two radio architectures are proposed and investigated: the signal is transmitted over the available spectrum either simultaneously (parallel concatenation) or sequentially (serial concatenation). Low complexity radio designs that handle the concatenation process sequentially and in parallel are introduced. Different TOA estimation algorithms applicable to multiband scenarios are studied, and their performance is theoretically evaluated and compared to simulations. Next, the results are extended to non-contiguous, non-equal sub-bands with the same SNR, which are more realistic assumptions in practical systems. The performance and complexity of the proposed technique are investigated as well. The results show that positioning accuracy can be adapted by selecting the bandwidth, center frequency, and SNR level of each sub-band.
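The BSS-based estimator itself is not given in the abstract; for orientation, the textbook baseline against which TOA methods are usually compared is the matched-filter (cross-correlation) estimator, sketched below with invented waveform and sampling parameters:

```python
import numpy as np

def toa_by_crosscorrelation(tx, rx, fs):
    """Classic matched-filter TOA baseline: the lag maximizing the
    cross-correlation of transmitted and received signals estimates the
    propagation delay. (Multipath-robust methods like the BSS approach
    in the abstract improve on this; it is only a reference point.)"""
    corr = np.correlate(rx, tx, mode="full")
    lag = np.argmax(np.abs(corr)) - (len(tx) - 1)
    return lag / fs

fs = 1e6                                   # 1 MHz sampling rate (assumed)
t = np.arange(256) / fs
tx = np.sin(2 * np.pi * 50e3 * t)          # toy probe waveform
true_delay = 40 / fs                       # 40 samples of propagation delay
rx = (np.concatenate([np.zeros(40), tx])
      + 0.05 * np.random.default_rng(2).standard_normal(296))
print(toa_by_crosscorrelation(tx, rx, fs), true_delay)  # both 4e-05 s
```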
Abstract:
Two of the indicators of the UN Millennium Development Goal of ensuring environmental sustainability are energy use and per capita carbon dioxide emissions. Increasing urbanization and a growing world population may require increased energy use in order to transport enough safe drinking water to communities. In addition, increased water use results in increased energy consumption, and thereby increased greenhouse gas emissions that promote global climate change. The study of multiple Municipal Drinking Water Distribution Systems (MDWDSs) that relates various MDWDS aspects--system components and properties--to energy use is strongly desirable, since understanding the relationship between system aspects and energy use aids energy-efficient design. In this study, components of an MDWDS, and/or the characteristics associated with a component, are termed MDWDS aspects (hereafter--system aspects). Many aspects of MDWDSs affect energy usage. Three system aspects were analyzed in this study: (1) system-wide water demand, (2) storage tank parameters, and (3) pumping stations. Seven MDWDSs were modeled with EPANET 2.0 to understand how these system aspects relate to energy use. Six of the systems were real and one was hypothetical. The study presented here is unique in its statistical approach using seven municipal water distribution systems. The first system aspect studied was system-wide water demand. The seven systems were analyzed for variation of water demand and its impact on energy use: to quantify the effects of water use reduction on energy use, each system was modeled and the energy usage quantified for various amounts of water conservation. The effect of water conservation on energy use was linear for all seven systems, and the average values of all the systems' energy use plotted on the same line with a high R² value. From this relationship, it can be ascertained that a 20% reduction in water demand results in approximately a 13% savings in energy use for all seven systems analyzed. This figure might hold true for many similar systems that are dominated by pumping rather than driven by gravity. The second system aspect analyzed was storage tank parameters: (1) tank maximum water level, (2) tank elevation, and (3) tank diameter. MDWDSs use a significant amount of electrical energy to pump water from low elevations (usually a source) to higher ones (usually storage tanks); this use of electrical energy affects pollution emissions and, therefore, potential global climate change as well. Various values of these tank parameters were modeled on the seven MDWDSs using a network solver and the energy usage recorded. Averaged over all seven analyzed systems, (1) reducing the maximum tank water level by 50% results in a 2% energy reduction, (2) the change in energy use with tank elevation is system specific, and (3) reducing the tank diameter by 50% results in approximately a 7% energy savings. The third system aspect analyzed in this study was pumping station parameters. A pumping station consists of one or more pumps.
The seven systems were analyzed to understand the effect of varying pump horsepower and the number of booster stations on energy use. Adding booster stations could save energy, depending upon the system characteristics. For systems with flat topography, a single main pumping station was found to use less energy. In systems with a higher-elevation neighborhood, however, one or more booster pumps combined with a reduced main pumping station capacity used less energy. The energy savings depended on the number of boosters and ranged from 5% to 66% for the five analyzed systems with higher-elevation neighborhoods (S3, S4, S5, S6, and S7). No energy savings were realized for the remaining two flat-topography systems, S1 and S2. The present study analyzed and established the relationship between various system aspects and energy use in seven MDWDSs, which aids in estimating the amount of energy savings attainable in MDWDSs. This energy savings would ultimately help reduce greenhouse gas (GHG) emissions, including per capita CO₂ emissions, thereby potentially lowering the global climate change effect and contributing to meeting the MDG of ensuring environmental sustainability.
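To make the headline figure concrete: the reported linear fit implies roughly 0.65% energy savings per 1% of water demand reduction for pumping-dominated systems. A minimal sketch of that proportionality, with the slope derived from the 20% to 13% figure in the abstract (an assumption on my part that the line passes through the origin):

```python
def energy_savings_pct(demand_reduction_pct, slope=13.0 / 20.0):
    """Linear demand-to-energy relation reported in the study: a 20%
    demand reduction gave ~13% energy savings, i.e. a slope of 0.65
    (valid only for pumping-dominated, non-gravity systems)."""
    return slope * demand_reduction_pct

for d in (5, 10, 20):
    print(f"{d}% less water demand -> ~{energy_savings_pct(d):.1f}% energy saved")
```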
Abstract:
Few real software systems are built completely from scratch nowadays. Instead, systems are built iteratively and incrementally, while integrating and interacting with components from many other systems. Adaptation, reconfiguration and evolution are normal, ongoing processes throughout the lifecycle of a software system. Nevertheless, the platforms, tools and environments we use to develop software are still largely based on an outmoded model that presupposes that software systems are closed and will not significantly evolve after deployment. We claim that in order to enable effective and graceful evolution of modern software systems, we must make these systems more amenable to change by (i) providing explicit, first-class models of software artifacts, change, and history at the level of the platform, (ii) continuously analysing static and dynamic evolution to track emergent properties, and (iii) closing the gap between the domain model and the developers' view of the evolving system. We outline our vision of dynamic, evolving software systems and identify the research challenges to realizing this vision.
Abstract:
Foliage Penetration (FOPEN) radar systems were introduced in the 1960s and have been constantly improved by several organizations since that time. The use of Synthetic Aperture Radar (SAR) approaches for this application has important advantages, due to the need for high resolution in two dimensions. The design of this type of system, however, includes some complications that are not present in standard SAR systems. FOPEN SAR systems need to operate at a low central frequency (VHF or UHF bands) in order to be able to penetrate the foliage, while high bandwidth is also required to obtain high resolution. Due to the low central frequency, large integration angles are required during SAR image formation, and therefore the Range Migration Algorithm (RMA) is used. This thesis identifies the three main complications that arise from these requirements. First, a high fractional bandwidth makes narrowband propagation models no longer valid. Second, the VHF and UHF bands are used by many communications systems, so the transmitted signal spectrum needs to be notched to avoid interfering with them. Third, those communications systems cause Radio Frequency Interference (RFI) in the received signal. The thesis carries out a thorough analysis of the three problems, their degrading effects, and possible ways to compensate for them. The UWB model is applied to the SAR signal, and the degradation it induces is derived. The result is tested through simulation of both a single-pulse stretch processor and the complete RMA image formation. Both methods show that the degradation is negligible, so the UWB propagation effect does not need compensation. A technique is derived to design a notched transmitted signal, and its effect on SAR image formation is evaluated analytically. It is shown that the stretch processor introduces a processing gain that reduces the degrading effects of the notches. The degradation remaining after the processing gain is assessed through simulation, and an experimental graph of degradation as a function of the percentage of nulled frequencies is obtained. The RFI is characterized and its effect on the SAR processor is derived. Once again, a processing gain is found to be introduced by the receiver. As the RFI power can be much higher than that of the desired signal, an algorithm is proposed to remove the RFI from the received signal before RMA processing. This algorithm is a modification of the Chirp Least Squares Algorithm (CLSA) explained in [4], adapted to deramped signals. The algorithm is derived analytically and its performance is evaluated through simulation, showing that it is effective in removing the RFI and reducing the degradation caused by both RFI and notching. Finally, conclusions are drawn as to the importance of each of these problems in SAR system design.
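The design tension the thesis describes can be quantified with two standard radar formulas: slant-range resolution ΔR = c/2B, and fractional bandwidth B/f_c (narrowband models are usually taken to break down above roughly 0.2). A small sketch with assumed FOPEN-like numbers, not figures from the thesis:

```python
C = 3e8  # speed of light, m/s

def range_resolution_m(bandwidth_hz):
    """Slant-range resolution of a pulse-compressed radar: c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def fractional_bandwidth(f_center_hz, bandwidth_hz):
    """B / f_c; above ~0.2 the usual narrowband SAR models break down."""
    return bandwidth_hz / f_center_hz

# Assumed FOPEN-like numbers: 300 MHz center frequency (UHF), 200 MHz bandwidth.
fc, B = 300e6, 200e6
print(range_resolution_m(B))          # 0.75 m resolution
print(fractional_bandwidth(fc, B))    # 0.67 -> strongly ultra-wideband
```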
Abstract:
BIPV systems are small PV generation units spread out over the territory, with very diverse characteristics. This makes cost-effective monitoring, fault detection, performance analysis, operation, and maintenance difficult, and as a result many problems affecting BIPV systems go undetected. To carry out effective automatic fault detection, we need a performance indicator that is reliable and can be applied to many PV systems at very low cost. Existing approaches for analyzing the performance of PV systems are often based on the Performance Ratio (PR), whose accuracy depends on good solar irradiation data, which in turn can be very difficult to obtain or cost-prohibitive for the BIPV owner. We present an alternative fault detection procedure based on a performance indicator that can be constructed solely from the energy production data measured at the BIPV systems. This procedure does not require operating-condition data such as solar irradiation, air temperature, or wind speed. The performance indicator, called Performance to Peers (P2P), is constructed from spatial and temporal correlations between the energy output of neighboring and similar PV systems. The method was developed from the analysis of energy production data from approximately 10,000 BIPV systems located in Europe. The results of our procedure are illustrated on hourly, daily and monthly data monitored during one year at a BIPV system located in the south of Belgium. Our results confirm that it is possible to carry out automatic fault detection without solar irradiation data. P2P proves to be more stable than PR most of the time, and thus constitutes a more reliable performance indicator for fault detection. We also discuss the main limitations of this novel methodology and suggest several promising lines of future research to improve on these procedures.
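The exact P2P computation is not given in the abstract; a minimal sketch of the underlying idea, comparing a system's output against the median of its peers over the same time steps, might look like this (array shapes, the peer aggregation, and the toy data are all assumptions):

```python
import numpy as np

def performance_to_peers(own_energy, peer_energy):
    """Ratio of a PV system's energy output to the median output of
    neighboring, similar systems over the same time steps. Values
    persistently below ~1 flag underperformance without needing any
    irradiation data -- peers share the same weather."""
    peer_median = np.median(peer_energy, axis=0)
    return own_energy / np.where(peer_median > 0, peer_median, np.nan)

rng = np.random.default_rng(3)
peers = rng.uniform(0.8, 1.2, size=(20, 24))   # 20 peers, 24 hourly values
own = 0.5 * np.median(peers, axis=0)           # simulated 50% underperformance
p2p = performance_to_peers(own, peers)
print(np.nanmean(p2p))                          # ~0.5 -> fault suspected
```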
Abstract:
The development of mixed-criticality virtualized multi-core systems poses new challenges that are the subject of active research. There is an additional complexity: it is now required to identify a set of partitions and to allocate applications to them. In this task, a number of issues have to be considered, such as the criticality level of the application, security and dependability requirements, the granularity of timing requirements, etc. The MultiPARTES [11] toolset relies on Model Driven Engineering (MDE), which is a suitable approach in this setting, as it helps to bridge the gap between design issues and partitioning concerns. MDE is changing the way systems are developed, reducing development time. In general, modelling approaches have shown their benefits when applied to embedded systems; these benefits have been achieved by fostering reuse through an intensive use of abstractions and by automating the generation of boilerplate code.
Abstract:
Symmetry is commonly observed in many biological systems. Here we discuss representative examples of the role of symmetry in structural molecular biology. Point group symmetries are observed in many protein oligomers whose three-dimensional atomic structures have been elucidated by X-ray crystallography. Approximate symmetry also occurs in multidomain proteins. Symmetry often confers stability on the molecular system and results in economical usage of basic components to build the macromolecular structure. Symmetry is also associated with cooperativity. Mild perturbation from perfect symmetry may be essential in some systems for dynamic functions.
Abstract:
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and summarize the physics results obtained in four years of operation of this machine. We discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding of the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). Our equilibrium simulations, on the other hand, are unprecedented both for the low temperatures reached and for the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
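Janus itself is custom FPGA hardware, but the abstract's point that the model involves only binary variables and (almost) no floating point can be sketched in software: an Edwards-Anderson spin glass with ±1 spins and ±1 couplings has small integer local fields, so the Metropolis acceptance test reduces to a precomputed lookup table. A minimal, single-threaded sketch (lattice size, temperature, and the 2D geometry are illustrative choices, not Janus parameters):

```python
import numpy as np

def metropolis_sweep(spins, Jh, Jv, accept):
    """One Metropolis sweep of a 2D Edwards-Anderson spin glass with
    +/-1 spins and +/-1 couplings on a periodic lattice. The local
    field is a small integer, so the exp(-beta*dE) test is a table
    lookup -- the trick that lets hardware like Janus keep floating
    point out of the update loop."""
    n = spins.shape[0]
    for i in range(n):
        for j in range(n):
            h = (Jh[i, j] * spins[i, (j + 1) % n]      # right neighbor
                 + Jh[i, j - 1] * spins[i, j - 1]      # left (wraps via -1)
                 + Jv[i, j] * spins[(i + 1) % n, j]    # down neighbor
                 + Jv[i - 1, j] * spins[i - 1, j])     # up (wraps via -1)
            dE = 2 * spins[i, j] * h                   # integer in {-8,...,8}
            if dE <= 0 or np.random.random() < accept[dE]:
                spins[i, j] = -spins[i, j]

L, beta = 16, 1.0
rng = np.random.default_rng(4)
spins = rng.choice([-1, 1], size=(L, L))
Jh = rng.choice([-1, 1], size=(L, L))   # horizontal couplings (quenched disorder)
Jv = rng.choice([-1, 1], size=(L, L))   # vertical couplings
accept = {dE: np.exp(-beta * dE) for dE in (2, 4, 6, 8)}  # precomputed table
metropolis_sweep(spins, Jh, Jv, accept)
```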
Abstract:
High-quality software, delivered on time and on budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative, whose vision was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after this initiative was issued, there is evidence of the vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns, and investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation of this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry employs it in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps certain artifacts, when reused, contribute more or are more difficult to use in embedded systems. Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. Qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codings that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from reuse in nonembedded systems, particularly in effort under a model-based development approach, and in quality where the development approach was not specified. The questionnaire showed differences between the development approaches used in embedded and nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters.
Nearly all the projects reported using code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents was the difficulty of reusing code in embedded systems when the platform changed. Semistructured interviews were performed to explain why the phenomena in the literature review and the questionnaire were observed. We asked respected industry professionals, such as senior fellows, fellows and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, since before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification; but, especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. We conclude that while reuse in embedded systems and nonembedded systems differs today, they are converging: as heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like that in nonembedded systems.
Abstract:
The EU began railway reform in earnest around the turn of the century. Two 'railway packages' have meanwhile been adopted, amounting to a series of directives, and a third package has been proposed; a range of complementary initiatives has been undertaken or is underway. This BEEP Briefing inspects the main economic aspects of EU rail reform. After highlighting the dramatic loss of market share of rail since the 1960s, the case for reform is argued to rest on three arguments: the need for greater competitiveness of rail, promoting the (market-driven) diversion of road haulage to rail as a step towards sustainable mobility in Europe, and an end to the disproportional claims on the public budgets of Member States. The core of the paper deals respectively with market failures in rail and in the internal market for rail services; the complex economic issues underlying vertical separation (unbundling) and pricing options; and the methods, potential and problems of introducing competition in rail freight and in passenger services. Market failures in the rail sector are several (natural monopoly, economies of density, safety and asymmetries of information), exacerbated by no fewer than seven technical and legal barriers precluding the practical operation of an internal rail market. The EU choice to opt for vertical unbundling (with benefits similar in nature to those in other network industries, e.g. preventing opaque cross-subsidisation and fostering greater cost revelation) risks the emergence of considerable coordination costs. The adoption of marginal cost pricing is problematic on economic grounds (drawbacks include arbitrary cost allocation rules in the presence of large economies of scope and relatively large common costs; a non-optimal incentive system, holding back the growth of freight services; and possibly anti-competitive effects of two-part tariffs). Without further detailed harmonisation, it may also lead to many different systems in the Member States, causing even greater distortions. Insofar as freight could develop into a competitive market, a combination of Ramsey pricing (given the incentive for service providers to keep market share) and price ceilings based on stand-alone costs might be superior in terms of competition, market growth and regulatory oversight. The incipient cooperative approach for path coordination and allocation is welcome but likely to be seriously insufficient. The arguments for introducing competition, notably in freight, are valuable and many, e.g. optimal cross-border services, quality differentiation as well as general quality improvement, larger scale for cost recovery and a decrease in rent seeking. Nevertheless, it is not correct to argue for the introduction of competition in rail tout court: it depends on the size of the market and on removing a host of barriers; it requires careful PSO definition and costing; and coordination failures ought to be pre-empted. On the other hand, reform and competition cannot and should not be assessed from a static perspective. Conduct and cost structures will change with reform, and infrastructure and investment in technology are known to generate enormous potential for cost savings, especially when coupled with the EU interoperability programme. All this dynamism may well help to induce entry and further enlarge the (net) welfare gains from EU railway reform. The paper ends with a few pointers for the way forward in EU rail reform.
Abstract:
In this paper we give an overview of some very recent work, as well as presenting a new approach, on the stochastic simulation of multi-scaled systems involving chemical reactions. In many biological systems (such as genetic regulation and cellular dynamics) there is a mix between small numbers of key regulatory proteins, and medium and large numbers of molecules. In addition, it is important to be able to follow the trajectories of individual molecules by taking proper account of the randomness inherent in such a system. We describe different types of simulation techniques (including the stochastic simulation algorithm, Poisson Runge–Kutta methods and the balanced Euler method) for treating simulations in the three different reaction regimes: slow, medium and fast. We then review some recent techniques on the treatment of coupled slow and fast reactions for stochastic chemical kinetics and present a new approach which couples the three regimes mentioned above. We then apply this approach to a biologically inspired problem involving the expression and activity of LacZ and LacY proteins in E. coli, and conclude with a discussion on the significance of this work.
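The stochastic simulation algorithm (SSA) the paper builds on can be stated compactly. Below is a minimal Gillespie implementation for a toy birth-death process; the reaction set and rate constants are illustrative inventions, not the LacZ/LacY model from the paper:

```python
import numpy as np

def gillespie(x, stoich, rates, propensity, t_end, seed=0):
    """Exact SSA: sample the time to the next reaction from an
    exponential with the total propensity, pick which reaction fires
    in proportion to its propensity, and apply its stoichiometry."""
    rng = np.random.default_rng(seed)
    t, traj = 0.0, [(0.0, x.copy())]
    while t < t_end:
        a = propensity(x, rates)
        a0 = a.sum()
        if a0 == 0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)    # waiting time to next event
        r = rng.choice(len(a), p=a / a0)  # which reaction fires
        x = x + stoich[r]
        traj.append((t, x.copy()))
    return traj

# Toy birth-death process: 0 -> X at rate k1; X -> 0 at rate k2*X.
stoich = np.array([[1], [-1]])
propensity = lambda x, k: np.array([k[0], k[1] * x[0]])
traj = gillespie(np.array([0]), stoich, (10.0, 0.5), propensity, t_end=20.0)
print(traj[-1])   # copy number fluctuates around k1/k2 = 20 molecules
```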
Abstract:
The performance of direct workers has a significant impact on the competitiveness of many manufacturing systems. Unfortunately, system designers are ill-equipped to assess this impact during the design process. An opportunity exists to assist designers by expanding the capabilities of popular simulation modelling tools and using them as a vehicle to better consider human factors during manufacturing system design. To support this requirement, this paper reports on an extensive review of the literature that develops a theoretical framework summarizing the principal factors and relationships that such a modelling tool should incorporate.