844 results for computation- and data-intensive applications


Relevance:

100.00%

Publisher:

Abstract:

Recent technological advances and market trends are driving a convergence of the High-Performance Computing (HPC) and Embedded Computing (EC) domains. On one side, new kinds of HPC applications are required by markets that need huge amounts of information to be processed within a bounded amount of time. On the other side, EC systems increasingly need to deliver higher performance in real time, challenging the capabilities of current architectures. The advent of next-generation many-core embedded platforms offers a chance to meet this converging need for predictable high performance, allowing HPC and EC applications to be executed on efficient and powerful heterogeneous architectures that integrate general-purpose processors with many-core computing fabrics. To this end, it is of paramount importance to develop new techniques for exploiting the massively parallel computation capabilities of such platforms in a predictable way. P-SOCRATES will tackle this challenge by merging leading research groups from the HPC and EC communities. The time-criticality and parallelisation challenges common to both areas will be addressed by an integrated framework for executing workload-intensive applications with real-time requirements on top of next-generation commercial off-the-shelf (COTS) platforms based on many-core accelerated architectures. The project will investigate new HPC techniques that fulfil real-time requirements. The main sources of indeterminism will be identified, and efficient mapping and scheduling algorithms will be proposed, along with the associated timing and schedulability analysis, to guarantee the real-time and performance requirements of the applications.
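The kind of schedulability analysis the abstract refers to can be illustrated with the classic fixed-priority response-time iteration (Joseph and Pandya); this is a textbook baseline, not the project's own analysis, and the task set below is made up:

```python
# Minimal sketch of fixed-priority response-time analysis.
# Tasks are (C, T): worst-case execution time and period,
# listed in descending priority order.

def response_time(tasks, i, limit=10_000):
    """Iterate R = C_i + sum_{j<i} ceil(R / T_j) * C_j to a fixed point."""
    C_i = tasks[i][0]
    R = C_i
    while True:
        # -(-a // b) is ceiling division for positive integers
        interference = sum(-(-R // T_j) * C_j for C_j, T_j in tasks[:i])
        R_next = C_i + interference
        if R_next == R:
            return R           # converged: worst-case response time
        if R_next > limit:
            return None        # unschedulable within the bound
        R = R_next

tasks = [(1, 4), (2, 6), (3, 12)]  # (C, T), highest priority first
print([response_time(tasks, i) for i in range(len(tasks))])  # → [1, 3, 10]
```

With deadlines equal to periods, all three response times stay below their deadlines, so this illustrative task set is schedulable.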

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data-term and the regularization-term are combined through multiplication in a single, parametrization invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach; data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. Thus, we can show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
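The splitting described above can be written in generic augmented-Lagrangian form; the notation here ($E_D$, $E_R$, $\mu$) is ours, not necessarily the paper's:

```latex
\min_{u,v} \; E_D(u) + E_R(v) \quad \text{s.t. } u = v,
\qquad
\mathcal{L}_\mu(u, v, \lambda) = E_D(u) + E_R(v)
  + \langle \lambda, u - v \rangle + \frac{\mu}{2}\,\| u - v \|^2 .
```

The augmented Lagrangian is then minimized alternately over $u$ (the data term) and $v$ (the regularity term), followed by the dual update $\lambda \leftarrow \lambda + \mu(u - v)$, which enforces the equality constraint between the two deformation fields in the limit.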

Relevance:

100.00%

Publisher:

Abstract:

Three-dimensional model design is a well-known and studied field with numerous real-world applications. However, the manual construction of these models can be time-consuming for the average user, despite the advantages offered by computational advances. This thesis presents an approach to the design of 3D structures using evolutionary computation and L-systems, in which such designs are produced automatically under a strict set of fitness functions. These functions focus on the geometric properties of the models produced, as well as their quantifiable aesthetic value, a topic which has not been widely investigated with respect to 3D models. New extensions to existing aesthetic measures are discussed and implemented in the presented system in order to produce designs which are visually pleasing. The system itself facilitates the construction of models requiring minimal user initialization and no user-based feedback throughout the evolutionary cycle. The models evolved by genetic programming are shown to satisfy multiple criteria, conveying a relationship between their assigned aesthetic value and their perceived aesthetic value. The applicability and effectiveness of a multi-objective approach to the problem is also explored, with a focus on both performance and visual results. Although subjective, these results offer insight into future applications and study in the field of computational aesthetics and automated structure design.
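The L-system half of such a pipeline can be sketched as a deterministic rewriting step that an evolutionary loop could evolve rules for; the rule set and axiom below are illustrative, not taken from the thesis:

```python
# A deterministic L-system expansion: each symbol with a production
# rule is rewritten in parallel, once per iteration. Symbols like
# [ + - would later drive a 3D turtle interpreter to build geometry.

def expand(axiom, rules, iterations):
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"F": "F[+F]F[-F]"}   # a simple branching rule
print(expand("F", rules, 1))  # → F[+F]F[-F]
```

An evolutionary system would encode `rules` in the genome and score the geometry produced by interpreting the expanded string against the fitness functions.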

Relevance:

100.00%

Publisher:

Abstract:

On-chip multiprocessor systems (On-Chip Multiprocessor [OCM]) are considered the best structures for occupying the space available on today's integrated circuits. In this work, we focus on an architectural model, called the isometric architecture of on-chip multiprocessor systems, which makes it possible to evaluate, predict and optimize OCM systems through an efficient organization of the nodes (processors and memories), and on methodologies for using these architectures effectively. In the first part of the thesis, we address the topology of the model and propose an architecture that allows efficient and massive use of on-chip memories. Processors and memories are organized according to an isometric approach that brings the data close to the processes, rather than optimizing transfers between conventionally arranged processors and memories. The architecture is a three-dimensional mesh model. The placement of the units in this model is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six memories at once and each memory can communicate with as many processors at once. In the second part of our work, we address a decomposition methodology in which the number of nodes in the model is ideal and can be determined from a matrix specification of the application to be handled by the proposed model. Since the performance of a model depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computational performance for the application at hand, we propose to find the ideal number of processors and memories for the system to be built.
We also consider the decomposition of the specification of the model to be built, or of the application to be processed, with respect to the load balance of the units. We thus propose a three-part decomposition approach: the transformation of the specification or application into an incidence matrix whose elements are the data flows between processes and data; a new methodology based on the Cell Formation Problem (CFP); and a load balancing of processes across processors and of data across memories. In the third part, still aiming for an efficient and high-performing system, we address the assignment of processors and memories with a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph; second, we assign values to the edges of this graph. For the assignment, we propose a matrix-based modelling of the decomposed applications and the use of the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual-perturbation approach that searches for the best combination of assignment costs while respecting parameters such as temperature, heat dissipation, energy consumption and the chip's surface area. The ultimate goal of this work is to offer designers of on-chip multiprocessor systems a non-traditional methodology and a systematic, efficient design-support tool usable from the functional specification phase of the system onwards.
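The QAP formulation used for the assignment step can be sketched in a few lines; the flow and distance matrices below are made-up toy data, not values from the thesis:

```python
from itertools import permutations

# Quadratic Assignment Problem cost for mapping units onto mesh nodes:
# flow[i][j] is the data flow between units i and j, dist[a][b] the
# distance between nodes a and b, and perm[i] the node assigned to unit i.

def qap_cost(flow, dist, perm):
    n = len(flow)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]   # unit-to-unit data flows
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]   # node-to-node distances
best = min(permutations(range(3)), key=lambda p: qap_cost(flow, dist, p))
print(best, qap_cost(flow, dist, best))    # → (0, 1, 2) 14
```

Exhaustive enumeration only works at toy sizes; QAP is NP-hard, which is why heuristics such as the gradual-perturbation approach described above are needed in practice.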

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, large applications are developed on top of numerous frameworks and middleware. The excessive use of temporary objects is a performance problem common to these applications, known as object churn. Identifying and understanding the sources of object churn is a difficult and laborious task, despite recent advances in automated analysis techniques. We present an interactive visual approach designed to help developers explore the behavior of their applications quickly and intuitively in order to find the sources of object churn. We implemented this technique in Vasco, a new flexible platform. Vasco focuses on three main design axes. First, the data to visualize are extracted from execution traces and analyzed so as to compute and retain only what is necessary for finding the sources of object churn; large programs can thus be visualized while keeping a clear and understandable representation. Second, the use of an intuitive representation minimizes the cognitive effort required by the visualization task. Finally, smooth transitions and interactions let users keep track of the actions they have performed. We demonstrate the effectiveness of the approach by identifying sources of object churn in three framework-intensive applications, including a commercial system.

Relevance:

100.00%

Publisher:

Abstract:

The thesis deals with the preparation and the chemical, optical, thermal and electrical characterization of five compounds, namely metal-free naphthalocyanine, vanadyl naphthalocyanine, zinc naphthalocyanine, europium dinaphthalocyanine, and europium diphthalocyanine, in pristine and iodine-doped forms. Two technologically important properties of these compounds have been investigated. The electrical properties are important for applications such as sensors and semiconductor lasers, while the opto-thermal properties are significant for optical imaging and data recording. The electrical properties were investigated by dc and ac techniques. This work has revealed novel information on the conduction mechanism in the five macrocyclic compounds and their iodine-doped forms. Useful data on the thermal diffusivity of the target compounds have also been obtained by optical techniques.

Relevance:

100.00%

Publisher:

Abstract:

The theme of the thesis centres on one important aspect of wireless sensor networks: energy efficiency. The limited energy source of the sensor nodes calls for the design of energy-efficient routing protocols, and protocol designs should try to minimize the number of communications among the nodes to save energy. Cluster-based techniques have been found to be energy-efficient: clusters are formed, the data from the nodes in each cluster are collected by its cluster head, and the cluster head forwards them to the base station. An appropriate cluster-head selection process and a desirable distribution of the clusters can reduce the energy consumption of the network and prolong its lifetime. In this work, two such schemes were developed for static wireless sensor networks. The first scheme addresses the energy wasted when cluster rebuilding involves all the nodes; a tree-based scheme is presented that alleviates this problem by rebuilding only sub-clusters of the network. An analytical model of the energy consumption of the proposed scheme is developed, and the scheme is compared with an existing cluster-based scheme. The simulation study confirmed the expected energy savings. The second scheme concentrates on building load-balanced, energy-efficient clusters to prolong the lifetime of the network. A voting-based approach is proposed that uses neighbour-node information in the cluster-head selection process; the number of nodes joining a cluster is restricted to obtain equally sized, optimal clusters, and multi-hop communication among the cluster heads is introduced to further reduce energy consumption. The simulation study showed that the scheme yields balanced clusters and that the network achieves a reduction in energy consumption. The main conclusion of this study was that a routing scheme should pay attention to successful data delivery from node to base station in addition to energy efficiency.
Cluster-based protocols have been extended from static to mobile scenarios by various authors, but none of the proposals addresses cluster-head election appropriately in view of mobility. An elegant cluster-head election scheme is presented to meet the challenge of maintaining cluster durability when all the nodes in the network are moving. The scheme has been simulated and compared with a similar approach. The proliferation of sensor networks gives users large amounts of sensor information to use in various applications, but sensor network programming is inherently difficult for several reasons; there should be an elegant way to collect the data gathered by a sensor network without worrying about its underlying structure. The final work presented addresses a way to collect data from a sensor network and present it to users in a flexible way. A service-oriented-architecture-based application is built, and the data collection task is exposed as a web service. This enables the composition of sensor data from different sensor networks to build interesting applications. The main objective of the thesis was to design energy-efficient routing schemes for both static and mobile sensor networks, and a progressive approach was followed to achieve this goal.
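The rotating cluster-head election that such schemes build on can be sketched with the LEACH-style threshold rule, a common baseline in this literature; the thesis's own voting-based rule is not reproduced here, and all parameters are illustrative:

```python
# LEACH-style cluster-head rotation: p is the desired fraction of
# heads per round, r the current round. Nodes that have recently
# served as head sit out; for the rest, the threshold rises over a
# cycle of ~1/p rounds so every node eventually serves.

def head_threshold(p, r):
    """Election threshold T(n) for nodes that have not recently been head."""
    return p / (1 - p * (r % round(1 / p)))

def elect(p, r, was_head_recently, draw):
    """A node becomes head if its uniform [0,1) draw falls below T(n)."""
    if was_head_recently:
        return False
    return draw < head_threshold(p, r)

print(head_threshold(0.1, 0))           # 0.1 at the start of a cycle
print(head_threshold(0.1, 9))           # close to 1.0 at the end of it
print(elect(0.1, 5, False, draw=0.15))  # → True (threshold is 0.2 here)
```

Restricting cluster size and adding multi-hop links between heads, as the second scheme does, are refinements layered on top of an election rule of this general shape.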

Relevance:

100.00%

Publisher:

Abstract:

The overall focus of the thesis is the synthesis and characterization of CdSe QDs overcoated with shell materials for various biological and chemical sensing applications. The second chapter deals with the synthesis and characterization of CdSe and CdSe/ZnS core-shell QDs; the primary aim of this work is to develop a simple method, based on photoinduced charge transfer, to optimize the shell thickness. The synthesis of water-soluble CdSe QDs, their cytotoxicity analysis, and the investigation of their nonlinear optical properties form the subject of the third chapter. The final chapter deals with the development of QD-based sensor systems for the selective detection of biologically and environmentally important analytes from aqueous media.

Relevance:

100.00%

Publisher:

Abstract:

Modern computer systems are plagued with stability and security problems: applications lose data, web servers are hacked, and systems crash under heavy load. Many of these problems, or anomalies, arise from rare program behavior caused by attacks or errors, and a substantial percentage of web-based attacks are due to buffer overflows. Many methods have been devised to detect and prevent the anomalous situations that arise from buffer overflows. The current state of the art in anomaly detection systems is relatively primitive and depends mainly on static code checking to handle buffer overflow attacks; for protection, stack guards and heap guards are also widely used. This dissertation proposes an anomaly detection system based on the frequencies of system calls in the system call trace. System call traces represented as frequency sequences are profiled using sequence sets; a sequence set is identified by the starting sequence and the frequencies of specific system calls. The deviation of the current input sequence from the corresponding normal profile in the frequency pattern of system calls is computed and expressed as an anomaly score, and a simple Bayesian model is used for accurate detection. Experimental results are reported which show that the frequency of system calls, represented using sequence sets, captures the normal behavior of programs under normal conditions of usage. This captured behavior allows the system to detect anomalies with a low rate of false positives. Data are presented which show that the Bayesian network over frequency variations responds effectively to induced buffer overflows; it can also help administrators detect deviations in program flow introduced by errors.
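The frequency-deviation idea can be sketched minimally; the dissertation's sequence sets and Bayesian model are richer than this, and the call names and normal profile below are illustrative:

```python
from collections import Counter

# Compare the system-call frequency profile of a trace against a
# normal profile and report the deviation as an anomaly score.

def anomaly_score(trace, normal_profile):
    """Sum of squared deviations between relative call frequencies."""
    freq = Counter(trace)
    total = len(trace)
    calls = set(freq) | set(normal_profile)
    return sum((freq[c] / total - normal_profile.get(c, 0.0)) ** 2
               for c in calls)

normal = {"read": 0.5, "write": 0.3, "open": 0.2}
print(anomaly_score(["read", "write", "read", "open"], normal))
# ≈ 0.005: close to the normal profile
print(anomaly_score(["execve", "mprotect", "execve", "execve"], normal))
# ≈ 1.005: calls never seen in the profile dominate the score
```

A threshold on this score (or, as in the dissertation, a Bayesian model over the deviations) then separates normal runs from anomalous ones.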

Relevance:

100.00%

Publisher:

Abstract:

Antennas are necessary and vital components of communication and radar systems, but their inability to adjust to new operating scenarios can sometimes limit system performance. Reconfigurable antennas can adapt to changing system requirements or environmental conditions and provide additional levels of functionality that may result in wider instantaneous frequency bandwidths, more extensive scan volumes, and radiation patterns with more desirable side-lobe distributions. Their agility and diversity have opened new horizons for different types of applications, especially in cognitive radio, Multiple Input Multiple Output systems, satellites, and many other areas. Reconfigurable antennas satisfy requirements for increased functionality, such as direction finding, beam steering, radar, and command and control, within a confined volume. The intelligence associated with reconfigurable antennas revolves around the switching mechanisms utilized. In the present work, we have investigated frequency-reconfigurable polarization-diversity antennas using two methods: first, by using low-loss, high-isolation switches such as PIN diodes, the antenna can be structurally reconfigured to keep the elements near their resonant dimensions for different frequency bands and/or polarizations; second, by incorporating variable capacitors, or varactors, to overcome many of the problems encountered with switches and their biasing. The performance of these designs has been studied using standard simulation tools from industry and academia and has been verified experimentally. Antenna design guidelines are also deduced by accounting for the resonances. One of the major contributions of the thesis lies in the analysis of the designed antennas using FDTD-based numerical computation to validate their performance.

Relevance:

100.00%

Publisher:

Abstract:

Recent trends envisage multi-standard architectures as a promising solution for future wireless transceivers to attain higher system capacities and data rates. The computationally intensive decimation filter plays an important role in channel selection for multi-mode systems, and an efficient reconfigurable implementation is key to achieving low power consumption. To this end, this paper presents a dual-mode Residue Number System (RNS) based decimation filter which can be programmed for the WCDMA and 802.16e standards. Decimation is done using multistage, multirate finite impulse response (FIR) filters. These FIR filters, implemented in the RNS domain, offer high speed because of their carry-free operation on smaller residues in parallel channels. The FIR filters are also programmable to a selected standard by reconfiguring the hardware architecture: the total area increases by only 24% to include WiMAX compared to a single-mode WCDMA transceiver, and in each mode the unused parts of the overall architecture are powered down and bypassed to save power. The performance of the proposed decimation filter in terms of critical-path delay and area is tabulated.
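The carry-free arithmetic that motivates an RNS FIR filter can be sketched as follows: multiply-accumulate independently in each residue channel, then reconstruct with the Chinese Remainder Theorem. The moduli and tap values are illustrative, not the paper's:

```python
from math import prod

MODULI = (7, 11, 13)           # pairwise coprime; dynamic range 7*11*13 = 1001

def to_rns(x):
    return tuple(x % m for m in MODULI)

def mac_rns(acc, a, b):
    """acc + a*b, channel by channel, with no carries between channels."""
    return tuple((r + s * t) % m
                 for r, s, t, m in zip(acc, to_rns(a), to_rns(b), MODULI))

def from_rns(res):
    """Chinese Remainder Theorem reconstruction (needs Python 3.8+ for pow)."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

# One FIR accumulation y = sum(h*x) computed entirely in RNS
acc = to_rns(0)
for h, x in [(3, 5), (2, 7), (4, 1)]:
    acc = mac_rns(acc, h, x)
print(from_rns(acc))  # → 33, i.e. 3*5 + 2*7 + 4*1
```

Because each channel works on a small residue, the channel multipliers are short and fully parallel; the cost is the CRT (or comparable) reconstruction at the output, which a hardware design amortizes over the whole filter.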

Relevance:

100.00%

Publisher:

Abstract:

Decision trees are very powerful tools for classification in data mining tasks that involve different types of attributes. Numeric data sets are usually first converted to categorical types and then classified using information-gain concepts. Information gain is a very popular and useful measure of whether splitting on a given attribute yields any benefit in terms of information content, but computing it is expensive for large data sets, and popular decision tree algorithms like ID3 cannot handle numeric data sets directly. This paper proposes statistical variance as an alternative to information gain, and the statistical mean as the split point, for attributes in completely numerical data sets. The new algorithm is shown to be competitive with its information-gain counterpart C4.5 and with many existing decision tree algorithms on the standard UCI benchmark datasets using the ANOVA test. The specific advantages of the proposed algorithm are that it avoids the computational overhead of information-gain computation for large data sets with many attributes, and that it avoids the time-consuming conversion of huge numeric data sets to categorical data. In summary, huge numeric datasets can be submitted directly to this algorithm without any attribute mappings or information-gain computations. It also blends two closely related fields, statistics and data mining.
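The variance-based splitting idea can be sketched directly: split a numeric attribute at its mean and measure how much the weighted variance of the two halves drops. Function names and data are illustrative, not the paper's code:

```python
# Variance reduction from splitting a numeric attribute at its mean,
# the statistic proposed as a cheap alternative to information gain.

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

def variance_reduction(values):
    """Drop in variance when `values` is split at its mean."""
    mu = sum(values) / len(values)
    left = [v for v in values if v <= mu]
    right = [v for v in values if v > mu]
    if not left or not right:
        return 0.0                      # degenerate split: no benefit
    n = len(values)
    weighted = (len(left) * variance(left) + len(right) * variance(right)) / n
    return variance(values) - weighted

print(variance_reduction([1, 2, 3, 10, 11, 12]))  # ≈ 20.25
```

The tree builder would evaluate this quantity for every attribute and split on the one with the largest reduction, just as an ID3-style builder would with information gain, but with no discretization step and only sums and squares to compute.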

Relevance:

100.00%

Publisher:

Abstract:

This thesis presents linear-optic, thermo-optic and nonlinear-optical studies on CdSe QD based nanofluids and their applications in solar cells and random lasers. Photoacoustic and thermal lens studies are the two characterization methods used for the thermo-optic studies, whereas the Z-scan method is used for the nonlinear-optical characterization. In all these cases, we selected a CdSe QD based nanofluid as a potential photonic material and studied the effect of metal NPs on its properties. Linear optical studies on these materials were carried out using various characterization methods, photoinduced studies among them. Thermal lens studies give information about the heat transport properties of these materials and their suitability for applications such as coolants and insulators, while photoacoustic studies show the effect of light on the absorption energy levels of the materials. We also observed that these materials can be used as optical limiters in the field of nonlinear optics. Applications of these materials were studied in the field of solar cells, in QDSSCs, where CdSe QDs act as the sensitizing material for light harvesting, and in random lasers, which have many applications in laser technology, where CdSe QDs act as the scattering medium for the gain.

Relevance:

100.00%

Publisher:

Abstract:

Evaluations of agri-environmental programmes have so far mostly concerned individual plots or programme-level, large-scale assessments. Little research has examined how the measures affect the development of individual natural regions, and there have been no studies interpreting the interplay between the motives of the land users on the one hand and land use and vegetation on the other. The dissertation "Effects of extensification and contract-based nature conservation programmes on the development of an agricultural landscape that is only just still contemporary" closes this gap. It explains the importance of the Hessian programmes HELP and HEKUL for the high share of grassland of high conservation value in the Rommeroder Hügelland west of the Meißner. The objects of investigation were the grassland vegetation and the farms, together with their people and their motives. This subject matter required an approach that combines methods from both the social and the natural sciences in order to recognize connections between farms and grassland vegetation. Extensive phytosociological surveys and interviews with the farm managers formed the basis for a field-plot database and further analyses. Interpreting the vegetation surveys in the context of farm decisions and motives required combining long-established approaches in a new way. The assessment of the programme effects rested on the field-plot database and on four scenarios of the area's future development under different programme continuations. Numerous thematic maps were produced to present the surveys and results. The above-average share of grassland types of conservation value on the programme areas and the interpretation of the scenarios demonstrated a high effectiveness of HELP and HEKUL in the area.
Vegetation types of conservation value are over-represented not only on the HELP contract-based conservation areas but also on HEKUL grassland. The four scenarios showed that restricting HELP to protected areas, abolishing the HEKUL grassland extensification, or scrapping both programmes without replacement would lead to a considerable deterioration of the conservation situation. At the same time, it became clear that without the agriculturally difficult natural conditions, and without a rather large-scale agricultural structure with full-time farms of above-average size and economic stability, the programme effects would not have been so pronounced. The fact that many farmers reject intensive agriculture out of personal conviction and farm with a considerably lower nitrogen intensity than HEKUL requires was a further reinforcing factor. The great importance of the individual motives of farm managers was also visible in the close relationships of individual grassland plant communities to particular farm types and even to individual farms; describing and interpreting these relationships yielded important insights into the socio-economic preconditions of different vegetation types. For the future development of Hessian agri-environmental support, the dissertation recommends introducing and preferring result-oriented remuneration schemes, better accounting for lightly fertilized grassland through differentiated support for very extensive fertilization regimes, a stronger modularization of the overall programme, and implementing all measures as contract-based schemes. Enterprise-branch grassland extensification should in future be offered in designated areas where an increased shift of farms to the minimum maintenance under the DirektZahlVerpflV is to be expected.

Relevance:

100.00%

Publisher:

Abstract:

Distributed systems are one of the most vital components of the economy. The most prominent example is probably the internet, a constituent element of our knowledge society. During recent years, the number of novel network types has steadily increased. Amongst others, sensor networks have emerged: distributed systems composed of tiny computational devices with scarce resources. The further development and heterogeneous connection of such systems imposes new requirements on the software development process. Mobile and wireless networks, for instance, have to organize themselves autonomously and must be able to react to changes in the environment and to failing nodes alike. Researching new approaches for the design of distributed algorithms may lead to methods with which these requirements can be met efficiently. In this thesis, one such method is developed, tested, and discussed with respect to its practical utility. Our new design approach for distributed algorithms is based on Genetic Programming, a member of the family of evolutionary algorithms. Evolutionary algorithms are metaheuristic optimization methods which copy principles from natural evolution. They use a population of solution candidates which they try to refine step by step in order to attain optimal values for predefined objective functions. The synthesis of an algorithm with our approach starts with an analysis step in which the desired global behavior of the distributed system is specified. From this specification, objective functions are derived which steer a Genetic Programming process in which the solution candidates are distributed programs. The objective functions rate how closely these programs approximate the goal behavior in multiple randomized network simulations. The evolutionary process step by step selects the most promising solution candidates and modifies and combines them with mutation and crossover operators.
This way, a description of the global behavior of a distributed system is translated automatically to programs which, if executed locally on the nodes of the system, exhibit this behavior. In our work, we test six different ways for representing distributed programs, comprising adaptations and extensions of well-known Genetic Programming methods (SGP, eSGP, and LGP), one bio-inspired approach (Fraglets), and two new program representations called Rule-based Genetic Programming (RBGP, eRBGP) designed by us. We breed programs in these representations for three well-known example problems in distributed systems: election algorithms, the distributed mutual exclusion at a critical section, and the distributed computation of the greatest common divisor of a set of numbers. Synthesizing distributed programs the evolutionary way does not necessarily lead to the envisaged results. In a detailed analysis, we discuss the problematic features which make this form of Genetic Programming particularly hard. The two Rule-based Genetic Programming approaches have been developed especially in order to mitigate these difficulties. In our experiments, at least one of them (eRBGP) turned out to be a very efficient approach and in most cases, was superior to the other representations.
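The selection and variation step of such a Genetic Programming loop can be sketched with programs as flat token lists; the thesis's richer representations (SGP, LGP, RBGP, and so on) are not reproduced here, and all names and tokens are illustrative:

```python
import random

# Tournament selection plus one-point crossover over token-list
# programs: the two variation operators named in the abstract.

def tournament(pop, fitness, k, rng):
    """Return the best of k randomly sampled candidates (lower is better)."""
    return min(rng.sample(pop, k), key=fitness)

def crossover(a, b, rng):
    """One-point crossover of two token-list programs."""
    cut = rng.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

rng = random.Random(42)                      # seeded for reproducibility
pop = [["send", "recv", "halt"],
       ["recv", "send", "halt"],
       ["recv", "halt"]]
parent_a = tournament(pop, fitness=len, k=2, rng=rng)  # shorter = fitter here
parent_b = tournament(pop, fitness=len, k=2, rng=rng)
child = crossover(parent_a, parent_b, rng)
print(child)
```

In the actual approach, `fitness` would run the candidate program on every node of a randomized network simulation and score how closely the emerging global behavior matches the specification, rather than simply measuring program length as this toy does.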