995 results for Multi-configuration


Relevance: 30.00%

Abstract:

Agent-oriented software engineering and software product lines are two promising software engineering techniques. Recent research work has been exploring their integration, namely multi-agent systems product lines (MAS-PLs), to promote reuse and variability management in the context of complex software systems. However, current product derivation approaches do not provide specific mechanisms to deal with MAS-PLs. This is essential because they typically encompass several concerns (e.g., trust, coordination, transaction, state persistence) that are constructed on the basis of heterogeneous technologies (e.g., object-oriented frameworks and platforms). In this paper, we propose the use of multi-level models to support the configuration knowledge specification and automatic product derivation of MAS-PLs. Our approach provides an agent-specific architecture model that uses abstractions and instantiation rules that are relevant to this application domain. In order to evaluate the feasibility and effectiveness of the proposed approach, we have implemented it as an extension of an existing product derivation tool, called GenArch. The approach has also been evaluated through the automatic instantiation of two MAS-PLs, demonstrating its potential and benefits to product derivation and configuration knowledge specification.
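The core of product derivation can be pictured as a mapping from a feature selection to the implementation artifacts a product needs. The sketch below is purely illustrative: the feature names, file paths and configuration knowledge are hypothetical and are not taken from GenArch.

```python
# Hypothetical configuration knowledge: which implementation artifacts each
# agent concern requires (names and paths are made up for illustration).
CONFIG_KNOWLEDGE = {
    "trust":        ["agents/TrustAgent.java"],
    "coordination": ["agents/Coordinator.java", "protocols/ContractNet.java"],
    "persistence":  ["services/StateStore.java"],
}

def derive_product(selected_features):
    """Collect the implementation artifacts required by a feature selection."""
    artifacts = set()
    for feature in selected_features:
        artifacts.update(CONFIG_KNOWLEDGE.get(feature, []))
    return sorted(artifacts)

print(derive_product(["trust", "coordination"]))
```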

Relevance: 30.00%

Abstract:

For many years, drainage design was mainly about providing sufficient network capacity. This traditional approach has been successful with the aid of computer software and technical guidance. However, drainage design criteria have been evolving due to rapid population growth, urbanisation, climate change and increasing sustainability awareness. Sustainable drainage systems that bring benefits in addition to water management have been recommended as better alternatives to conventional pipes and storage. Although the concepts and good-practice guidance have been communicated to decision makers and the public for years, network capacity still remains a key design focus in many circumstances, while the additional benefits are generally considered secondary. Yet the picture is changing. The industry is beginning to realise that delivering multiple benefits should be given top priority, while the drainage service itself can be considered a secondary benefit. This shift in focus means the industry has to adapt to new design challenges, and new guidance and computer software are needed to assist decision makers. For this purpose, we developed a new decision support system. The system consists of two main components – a multi-criteria evaluation framework for drainage systems and a multi-objective optimisation tool. Users can systematically quantify the performance, life-cycle costs and benefits of different drainage systems using the evaluation framework. The optimisation tool can assist users in determining combinations of design parameters, such as the sizes, order and type of drainage components, that maximise multiple benefits. In this paper, we focus on the optimisation component of the decision support framework. The optimisation problem formulation, parameters and general configuration are discussed. We also look at the sensitivity of individual variables and the benchmark results obtained using common multi-objective optimisation algorithms.
The work described here is the output of an EngD project funded by EPSRC and XP Solutions.
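The multi-objective tools mentioned above rest on the notion of Pareto dominance: a candidate design is kept only if no other candidate is at least as good in every objective and strictly better in at least one. A minimal sketch, with made-up cost/benefit numbers and both objectives minimised (benefit negated):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly
    better in at least one (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (cost, -benefit) for four hypothetical drainage designs
designs = [(100, -5), (120, -9), (110, -4), (130, -9)]
print(pareto_front(designs))  # → [(100, -5), (120, -9)]
```

Optimisers such as NSGA-II, commonly used for this class of problem, build on exactly this dominance test when ranking candidate solutions.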

Relevance: 30.00%

Abstract:

A novel AC Biosusceptometry (ACB) system with thirteen sensors was implemented and characterized in vitro using magnetic phantoms. The system presents coils in a coaxial arrangement, with one pair of excitation coils outside and thirteen pairs of detection coils inside. A first-order gradiometric configuration was utilized for optimal detection of magnetic signals. Several physical parameters, such as baseline, number of turns, excitation field and diameters, were studied to improve the signal-to-noise ratio. The system exhibits enhanced sensitivity and spatial resolution due to the higher density of sensors per area. In the future, these characteristics will make it possible to obtain images of magnetic markers or tracers in the gastrointestinal tract, focusing on physiological and pharmaceutical studies. ACB emerged as an interesting, noninvasive and low-cost way to investigate gastrointestinal parameters, and this system can contribute to a more accurate interpretation of biomedical signals and images.
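The idea behind the first-order gradiometric configuration can be illustrated as a common-mode subtraction: a reference coil sees only the excitation field and ambient noise, so subtracting its reading from the coil near the sample leaves the sample signal. The numbers below are illustrative only:

```python
def gradiometric_signal(near, far):
    """First-order gradiometer: subtract the far (reference) coil readings
    from the near (sample) coil readings to cancel the uniform excitation
    field and common-mode noise."""
    return [n - f for n, f in zip(near, far)]

# Both coils see the same background (10.0 a.u.); only the near coil
# additionally picks up the magnetic sample (+0.3 a.u.).
near = [10.30, 10.31, 10.29]
far = [10.00, 10.00, 10.00]
print(gradiometric_signal(near, far))
```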

Relevance: 30.00%

Abstract:

Graduate Program in Biological Sciences (Zoology) - IBRC

Relevance: 30.00%

Abstract:

Current SoC design trends are characterized by the integration of a larger number of IPs targeting a wide range of application fields. Such multi-application systems are constrained by a set of requirements, and in this scenario networks-on-chip (NoCs) are becoming more important as the on-chip communication structure. Designing an optimal NoC that satisfies the requirements of each individual application requires the specification of a large set of configuration parameters, leading to a wide solution space. It has been shown that IP mapping is one of the most critical parameters in NoC design, strongly influencing SoC performance. IP mapping has been solved for single-application systems using single- and multi-objective optimization algorithms. In this paper we propose the use of a multi-objective adaptive immune algorithm (M(2)AIA), an evolutionary approach, to solve the multi-application NoC mapping problem. Latency and power consumption were adopted as the target multi-objective functions. To assess the efficiency of our approach, our results are compared with those of genetic and branch-and-bound multi-objective mapping algorithms. We tested 11 well-known benchmarks, including random and real applications, combining up to 8 applications on the same SoC. The experimental results show that M(2)AIA reduces power consumption and latency by, on average, 27.3% and 42.1% compared to the branch-and-bound approach, and by 29.3% and 36.1% compared to the genetic approach.
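A common latency proxy in such IP-mapping studies is the total traffic volume weighted by the Manhattan hop distance between tiles on the mesh. The sketch below shows that cost model only, not the M(2)AIA algorithm itself; IP names, the 2x2 grid and traffic volumes are made up:

```python
# tile id -> (x, y) position on a hypothetical 2x2 mesh
MESH = {0: (0, 0), 1: (0, 1), 2: (1, 0), 3: (1, 1)}

def hops(tile_a, tile_b):
    """Manhattan hop distance between two mesh tiles."""
    (xa, ya), (xb, yb) = MESH[tile_a], MESH[tile_b]
    return abs(xa - xb) + abs(ya - yb)

def mapping_cost(mapping, traffic):
    """mapping: IP -> tile; traffic: (src IP, dst IP, volume) triples.
    Total communication cost = sum of volume x hop distance."""
    return sum(vol * hops(mapping[src], mapping[dst])
               for src, dst, vol in traffic)

traffic = [("cpu", "mem", 10), ("cpu", "dsp", 4), ("dsp", "mem", 6)]
mapping = {"cpu": 0, "mem": 1, "dsp": 2}
print(mapping_cost(mapping, traffic))  # → 26
```

A mapping optimizer then searches over the IP-to-tile assignments to minimise this cost jointly with a power model.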

Relevance: 30.00%

Abstract:

The LA-MC-ICP-MS method applied to U-Pb in situ dating is still rapidly evolving due to improvements in both lasers and ICP-MS. To test the validity and reproducibility of the method, 5 different zircon samples, including the standard Temora-2, ranging in age between 2.2 Ga and 246 Ma, were dated using both LA-MC-ICP-MS and SHRIMP. The selected zircons were dated by SHRIMP and, after gentle polishing, the laser spot was driven to the same site, or to the same zircon phase, with a 213 nm laser microprobe coupled to a multi-collector mixed system. The data were collected with a routine spot size of 25 μm and, in some cases, of 15 and 40 μm. A careful cross-calibration using a diluted U-Th-Pb solution to calculate the Faraday-reading-to-counting-rate conversion factors, and the highly suitable GJ-1 standard zircon for external calibration, were of paramount importance for obtaining reliable results. All age results were concordant within the experimental errors. The age errors assigned using the LA-MC-ICP-MS technique were, in most cases, higher than those obtained by SHRIMP, but unless high-resolution stratigraphy is required, the laser technique has certain advantages.
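Behind both techniques sits the standard 206Pb/238U decay equation, t = ln(1 + 206Pb*/238U) / λ238. A quick sketch; the measured ratio below is illustrative, chosen to land near the roughly 417 Ma reference age of Temora:

```python
import math

LAMBDA_238 = 1.55125e-10  # 238U decay constant, per year

def pb206_u238_age_ma(ratio):
    """Age in Ma from a radiogenic 206Pb*/238U ratio: t = ln(1 + r) / lambda."""
    return math.log(1.0 + ratio) / LAMBDA_238 / 1e6

# An illustrative ratio of ~0.0668 corresponds to an age near 417 Ma.
print(round(pb206_u238_age_ma(0.0668), 1))
```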

Relevance: 30.00%

Abstract:

The evolution of embedded electronics applications forces electronic systems designers to meet ever-increasing requirements. This evolution pushes the computational power of digital signal processing systems, as well as the energy required to accomplish the computations, due to the increasing mobility of such applications. Current approaches to matching these requirements rely on the adoption of application-specific signal processors. Such devices exploit powerful accelerators, which are able to meet both performance and energy requirements. On the other hand, the high specificity of such accelerators often results in a lack of flexibility, which affects non-recurring engineering costs, time to market and market volumes. The state of the art mainly proposes two solutions to overcome these issues with the ambition of delivering reasonable performance and energy efficiency: reconfigurable computing and multi-processor computing. Both of these solutions benefit from post-fabrication programmability, which results in increased flexibility. Nevertheless, the gap between these approaches and dedicated hardware is still too wide for many application domains, especially when targeting the mobile world. In this scenario, flexible and energy-efficient acceleration can be achieved by merging these two computational paradigms in order to address all the constraints introduced above. This thesis focuses on the exploration of the design and application spectrum of reconfigurable computing, exploited as application-specific accelerators for multi-processor systems-on-chip. More specifically, it introduces a reconfigurable digital signal processor featuring a heterogeneous set of reconfigurable engines, and a homogeneous multi-core system, exploiting three different flavours of reconfigurable and mask-programmable technologies as implementation platforms for application-specific accelerators.
In this work, the various trade-offs concerning the utilization of multi-core platforms and the different configuration technologies are explored, characterizing the design space of the proposed approach in terms of programmability, performance, energy efficiency and manufacturing costs.

Relevance: 30.00%

Abstract:

Beamforming entails joint processing of multiple signals received or transmitted by an array of antennas. This thesis addresses the implementation of beamforming in two distinct systems, namely a distributed network of independent sensors and a broad-band multi-beam satellite network. With the rising popularity of wireless sensors, scientists are taking advantage of the flexibility of these devices, which come with very low implementation costs. Simplicity, however, is intertwined with scarce power resources, which must be carefully rationed to ensure successful measurement campaigns throughout the whole duration of the application. In this scenario, distributed beamforming is a cooperative communication technique which allows nodes in the network to emulate a virtual antenna array, seeking power gains in the order of the size of the network itself when required to deliver a common message signal to the receiver. To achieve a desired beamforming configuration, however, all nodes in the network must agree upon the same phase reference, which is challenging in a distributed set-up where all devices are independent. The first part of this thesis presents new algorithms for phase alignment, which prove to be more energy efficient than existing solutions. With the ever-growing demand for broad-band connectivity, satellite systems have great potential to guarantee service where terrestrial systems cannot penetrate. In order to satisfy the constantly increasing demand for throughput, satellites are equipped with multi-fed reflector antennas to resolve spatially separated signals. However, increasing the number of feeds on the payload burdens the link between the satellite and the gateway with an extensive amount of signaling, and may call for much more expensive multiple-gateway infrastructures.
This thesis focuses on an on-board non-adaptive signal processing scheme denoted as Coarse Beamforming, whose objective is to reduce the communication load on the link between the ground station and space segment.
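The phase-alignment problem above can be illustrated with the classic one-bit-feedback scheme from the distributed beamforming literature (not necessarily the thesis' own algorithms): every node randomly perturbs its carrier phase, the receiver broadcasts a single bit saying whether the combined power improved, and nodes keep or undo their perturbation accordingly. A minimal simulation sketch:

```python
import cmath
import random

random.seed(1)

def received_power(phases):
    """Power of the coherent sum of unit-amplitude carriers."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) ** 2

n = 10  # number of nodes (illustrative)
phases = [random.uniform(0.0, 2.0 * cmath.pi) for _ in range(n)]
best = received_power(phases)
for _ in range(2000):
    # each node applies a small independent random perturbation
    trial = [p + random.gauss(0.0, 0.1) for p in phases]
    power = received_power(trial)
    if power > best:  # this comparison is the one feedback bit
        phases, best = trial, power

# ratio of achieved power to the ideal coherent gain n**2
print(round(best / n ** 2, 2))
```

As the phases align, the received power approaches the ideal coherent gain of n squared, i.e. a gain in the order of the size of the network.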

Relevance: 30.00%

Abstract:

In this dissertation, the influence of a precast concrete cladding system on the structural robustness of a multi-storey steel-composite building is studied. The analysis follows the well-established framework developed at Imperial College London for the appraisal of robustness of multi-storey buildings. For this purpose, a simplified nonlinear model of a typical precast concrete façade system is developed. Particular attention is given to the connection system between structural frame and panel, recognised as the driving component of the nonlinear behaviour of the façade system. Only connections involved in the gravity load path are evaluated (bearing connections). Alongside the standard connection, a newly proposed system (the Slotted Bearing Connection) is designed to achieve a more ductile behaviour of the panel-connection system. A parametric study involving the dimensions of the panel-connection components is carried out to search for an optimal configuration of the bearing connection. From the appraisal of the structural robustness of the panelised frame, it is found that standard connection systems may reduce the robustness of a multi-storey frame due to their poor ductility, while the newly proposed connection is able to guarantee an enhanced response of the panelised multi-storey frame thanks to its higher ductility.

Relevance: 30.00%

Abstract:

Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. Such optimisation involves sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of the WDN. In this thesis, the author analyses two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). In both cases, the two main objectives were the minimisation of energy cost (€) or energy consumption (kWh), along with the total number of pump switches (TNps) during a day. For this purpose, a decision support system generator for multi-objective optimisation, GANetXL, was used; it was developed by the Centre for Water Systems at the University of Exeter. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis is required. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm that provides the Pareto front of each configuration. The first experiment concerned the Anytown network, a large network with a pump station of four fixed-speed parallel pumps that boost the water supply. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. This achieved great energy and cost savings, along with a reduction in the number of pump switches. The results are thoroughly illustrated in chapter 7, with comments and a variety of graphs for the different configurations. The second experiment concerned the network of Cabrera city, a smaller WDN with a single fixed-speed pump in the system.
The optimisation problem was the same: minimising energy consumption and, in parallel, minimising TNps, using the same optimisation tool (GANetXL). The main scope was to carry out several different experiments over a wide variety of configurations, using different pumps (this time keeping the fixed-speed mode), different tank levels, different pipe diameters and different emitter coefficients. These different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision support system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result shows that a good optimisation point has been reached.
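The two objectives can be encoded directly from an hourly on/off pump schedule: energy cost under an hourly tariff, and the total number of pump switches (TNps) as the count of on/off transitions. The pump power and tariff values below are made up for illustration:

```python
def count_switches(schedule):
    """TNps: number of on/off transitions over the scheduling horizon."""
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)

def energy_cost(schedule, power_kw, tariff_eur_per_kwh):
    """Energy cost (EUR) of running the pump on the given hourly schedule."""
    return sum(on * power_kw * t
               for on, t in zip(schedule, tariff_eur_per_kwh))

schedule = [1, 1, 1, 0, 0, 0, 1, 1] + [0] * 16  # 24 hourly on/off states
tariff = [0.10] * 8 + [0.20] * 16               # cheap night, expensive day
print(count_switches(schedule), energy_cost(schedule, 50.0, tariff))
```

An optimiser such as NSGA-II then searches over candidate schedules, trading the two objectives off along a Pareto front.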

Relevance: 30.00%

Abstract:

Nowadays the rise of non-recurring engineering (NRE) costs associated with complexity is becoming a major factor in SoC design, limiting both scaling opportunities and the flexibility advantages offered by the integration of complex computational units. The introduction of embedded programmable elements can represent an appealing solution, able both to guarantee the desired flexibility and upgradability and to widen the SoC market. In particular, embedded FPGA (eFPGA) cores can provide bit-level optimization for those applications which benefit from synthesis, at the price of performance penalties and area overhead with respect to standard-cell ASIC implementations. In this scenario, this thesis proposes a design methodology for a synthesizable programmable device designed to be embedded in a SoC. A soft-core embedded FPGA (eFPGA) is hence presented and analyzed in terms of the opportunities offered by a fully synthesizable approach, following an implementation flow based on standard-cell methodology. A key point of the proposed eFPGA template is that it adopts a Multi-Stage Switching Network (MSSN) as the foundation of the programmable interconnect, since it can be efficiently synthesized and optimized through a standard-cell-based implementation flow while ensuring an intrinsically congestion-free network topology. The flexibility potential of the eFPGA has been evaluated using different technology libraries (STMicroelectronics CMOS 65nm and BCD9s 0.11μm) through a design space exploration in terms of area-speed-leakage tradeoffs, enabled by the full synthesizability of the template. Since the most relevant disadvantage of the adopted soft approach, compared to a hard core, is a performance overhead, the eFPGA analysis has been carried out targeting small area budgets.
The generation of the configuration bitstream has been obtained thanks to the implementation of a custom CAD flow environment, and has allowed functional verification and performance evaluation through an application-aware analysis.

Relevance: 30.00%

Abstract:

This thesis deals with the analytic study of the dynamics of multi-rotor unmanned aerial vehicles. It is conceived to give a set of mathematical instruments suited to the theoretical study and design of these flying machines. The entire work is organized in analogy with classical academic texts on airplane flight dynamics. First, the non-linear equations of motion are defined and all the external actions are modeled, with particular attention to rotor aerodynamics. All the equations are provided in a form, and with personal expedients, that make them directly exploitable in a simulation environment. This required answering questions such as the trim of such mathematical systems. The entire treatment is developed with the aim of describing different multi-rotor configurations. Then, the linearized equations of motion are derived, and the stability and control derivatives of the linear model are computed. The study of static and dynamic stability characteristics is thus addressed, showing the influence of the various geometric and aerodynamic parameters of the machine, and in particular of the rotors. All the theoretical results are finally utilized in two interesting cases. One concerns the design of control systems for attitude stabilization: the linear model permits the tuning of linear controller gains, and the non-linear model allows numerical testing. The other case is the study of the performance of an innovative quad-rotor aircraft configuration. With the non-linear model, the feasibility of maneuvers impossible for a traditional quad-rotor is assessed. The linear model is applied to the controllability analysis of such an aircraft in the case of actuator blockage.
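The simplest trim problem of this kind is steady hover of a symmetric quad-rotor: each rotor carries a quarter of the weight, and yaw torques cancel pairwise. A hedged numeric sketch, with an illustrative mass and thrust coefficient (not values from the thesis), assuming the common quadratic thrust model T = k·ω²:

```python
G = 9.81  # gravitational acceleration, m/s^2

def hover_trim(mass_kg, k_thrust, n_rotors=4):
    """Per-rotor thrust (N) and rotor speed (rad/s) at steady hover,
    assuming the quadratic thrust model T = k_thrust * omega**2."""
    thrust = mass_kg * G / n_rotors
    omega = (thrust / k_thrust) ** 0.5
    return thrust, omega

# Illustrative 1.2 kg vehicle with k = 3e-5 N s^2
thrust, omega = hover_trim(mass_kg=1.2, k_thrust=3.0e-5)
print(round(thrust, 3), round(omega, 1))
```

The same balance generalises to other multi-rotor configurations by distributing the weight over n rotors and imposing zero net roll, pitch and yaw moments.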

Relevance: 30.00%

Abstract:

The objective of this study was to investigate whether it is possible to pool together diffusion spectrum imaging (DSI) data from four different scanners located at three different sites. Two of the scanners had an identical configuration, whereas two did not. To measure the variability, we extracted three scalar maps (ADC, FA and GFA) from the DSI data and utilized a region-based and a tract-based analysis. Additionally, a phantom study was performed to rule out potential factors arising from scanner performance in case a systematic bias occurred in the subject study. The work was split into three experiments: intra-scanner reproducibility, reproducibility with twin-scanner settings, and reproducibility with other configurations. Overall, for the intra-scanner and twin-scanner experiments, the region-based coefficient of variation (CV) was in the range 1%-4.2%, and below 3% for almost every bundle in the tract-based analysis. The uncinate fasciculus showed the worst reproducibility, especially for FA and GFA values (CV 3.7-6%). For the GFA and FA maps, an ICC value of 0.7 or above was observed in almost all regions/tracts. In the last experiment, a very high similarity was found between the outcomes of the two scanners with identical settings; however, this was not the case for the two other imagers. Given that the overall variation in our study is low for the imagers with identical settings, our findings support the feasibility of cross-site pooling of DSI data from identical scanners.
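The region-based variability metric used here is the coefficient of variation of repeated scalar measurements. A minimal sketch, with hypothetical FA values for one region measured on repeated scans:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical FA values for one region across repeated acquisitions
fa_repeats = [0.48, 0.49, 0.47, 0.48]
print(round(coefficient_of_variation(fa_repeats), 2))
```

A CV below a few percent, as reported above for most bundles, indicates that repeated measurements vary little relative to their mean.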

Relevance: 30.00%

Abstract:

The chronology and configuration of the Svalbard Barents Sea Ice Sheet (SBSIS) during the Late Weichselian (LW) are based on few and geographically scattered data. Thus, the timing and configuration of the SBSIS have been a subject of extensive debate. We present provenance data of erratic boulders and cosmogenic 10Be ages of bedrock and boulders from Northwest Spitsbergen (NWS), Svalbard to determine the thickness, configuration and chronology during the LW. We sampled bedrock and boulders of mountain summits and summit slopes, along with erratic boulders from coastal locations around NWS. We suggest that a local ice dome over central NWS during the LW drained radially in all directions. Provenance data from erratic boulders from the northern coastal lowland Reinsdyrflya suggest northeastward ice flow through Liefdefjorden. 10Be ages of high-elevation erratic boulders in central NWS (687–836 m above sea level), ranging from 18.3 ± 1.3 ka to 21.7 ± 1.4 ka, indicate that the centre of a local ice dome was at least 300 m thicker than at present. 10Be ages of all high-elevation erratics (>400 m above sea level, central and coastal locations) indicate the onset of ice dome thinning at 25–20 ka. 10Be ages from erratic boulders on Reinsdyrflya, ranging from 11.1 ± 0.8 ka to 21.4 ± 1.7 ka, indicate an ice cover over the entire Reinsdyrflya during the LW and a complete deglaciation prior to the Holocene, but apparently later than the thinning in the mountains. The lack of moraine deposits and the preservation of beach terraces suggest that the ice covering this peninsula was possibly cold-based and that Reinsdyrflya was part of an inter-ice-stream area covered by slow-flowing ice, as opposed to the adjacent fjord, which was possibly filled by a fast-flowing ice stream. Despite the early thinning of the ice sheet (25–20 ka), we find a later timing of deglaciation of the fjords and the distal lowlands.
Several bedrock samples (10Be) from vertical transects in the central mountains of NWS pre-date the LW, and suggest either ice-free or pervasive cold-based ice conditions. Our reconstruction is aligned with the previously suggested hypothesis that a complex multi-dome ice-sheet configuration occupied Svalbard and the Barents Sea during the LW, with numerous drainage basins feeding fast ice streams, separated by slow-flowing, possibly cold-based, inter-ice-stream areas.

Relevance: 30.00%

Abstract:

Point Distribution Models (PDMs) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem is especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) in the context of multi-organ analysis, able to efficiently characterize the different inter-object relations as well as the particular locality of each object separately. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller anatomically significant regions within organs. The significant advantage of the GEM-PDM method over two previous approaches (PDM and hierarchical PDM), in terms of shape modeling accuracy and robustness to noise, has been successfully verified on two different databases of sets of multiple organs: six subcortical brain structures and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active-shape-model-based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
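A classical PDM, the baseline the paper builds on, is the mean landmark vector plus the principal modes of variation of the training shapes. A minimal sketch on toy data (shapes are assumed already aligned; the "population" is a unit square whose width varies, so one mode should capture essentially all the variance):

```python
import numpy as np

def build_pdm(shapes):
    """shapes: (n_samples, 2*n_landmarks) aligned landmark coordinates.
    Returns mean shape, eigenvalues and modes, largest eigenvalue first."""
    mean = shapes.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(shapes - mean, rowvar=False))
    order = np.argsort(evals)[::-1]
    return mean, evals[order], evecs[:, order]

def synthesize(mean, modes, b):
    """New shape x = mean + P b for mode weights b."""
    return mean + modes[:, : len(b)] @ b

rng = np.random.default_rng(0)
base = np.array([0, 0, 1, 0, 1, 1, 0, 1], float)     # x0,y0,...,x3,y3
stretch = np.array([0, 0, 1, 0, 1, 0, 0, 0], float)  # widen the square
shapes = np.stack([base + w * stretch
                   for w in rng.normal(0.0, 0.2, 50)])
mean, evals, modes = build_pdm(shapes)
print(round(evals[0] / evals.sum(), 2))  # fraction of variance in mode 1
```

New plausible shapes are then generated with `synthesize(mean, modes, b)`, with each mode weight in `b` typically bounded by a few multiples of the square root of its eigenvalue.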