243 results for Arquiteturas recon


Relevance:

10.00%

Publisher:

Abstract:

The main objective of this thesis is to implement a supporting architecture for autonomic hardware systems, capable of managing hardware running on reconfigurable devices. The proposed architecture implements manipulation, generation, and communication functionalities using the Context-Oriented Active Repository approach. The solution is a hardware/software architecture called the Autonomic Hardware Manager (AHM), which contains an Active Repository of hardware components. Using the repository, the architecture can manage the connected systems at run time, enabling autonomic features such as self-management, self-optimization, self-description, and self-configuration. The architecture also contains a meta-model for representing the Operating Context of hardware systems; this meta-model serves as the basis for the context-sensing modules required by the Active Repository architecture. To demonstrate the architecture's functionality and support the thesis hypothesis, three experiments were planned and implemented: the Hardware Reconfigurable Filter, an application that implements digital filters in reconfigurable hardware; the Autonomic Image Segmentation Filter, which presents the design and implementation of an autonomic image-processing application; and the Autonomic Autopilot, an autopilot for unmanned aerial vehicles. In these applications, the architectures were organized into modules according to their functionality. Some modules were implemented in HDL and synthesized in hardware; others were kept in software. The applications were then integrated with the AHM, allowing them to adapt to different Operating Contexts and thus become autonomic.

Abstract:

An important problem faced by the oil industry is the distribution of multiple oil products through pipelines. Distribution takes place in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes), interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery deadlines, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers treat this problem from a discrete viewpoint in which the network flow is seen as the sending of batches. Usually there is no separation device between batches of different products, so the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks, but the costs incurred by interface losses cannot be disregarded. Cost also depends on pumping expenses, which are mostly due to electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed for the problem of distributing oil derivatives in networks while minimizing three objectives simultaneously: delivery time, interface losses, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. The hybridizations focus mainly on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA-II, and SPEA2, yielding three architectures named MOTA/D, NSTA, and SPETA. An experimental study compares the algorithms on thirty test cases. The results are analysed with Pareto-compliant quality indicators, and their significance is evaluated with non-parametric statistical tests.
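The thirty-test-case comparison above rests on Pareto dominance between three-objective cost vectors. A minimal sketch of the dominance filter (the objective values below are illustrative, not taken from the thesis):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Filter a population down to its Pareto front."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Toy objective vectors: (delivery time, interface losses, electricity cost)
pop = [(10, 3.0, 50), (12, 2.5, 48), (11, 3.5, 60), (10, 2.0, 55)]
front = non_dominated(pop)
```

MOEA/D, NSGA-II, SPEA2, and the hybrids cited all build on a dominance test of this kind (or a decomposition of it) when ranking candidate schedules.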

Abstract:

The design of real-time embedded systems requires precise control of the passage of time, both in the computation performed by each module and in the communication between modules. Such systems generally consist of several modules, each designed for a specific task, with restricted communication between modules in order to obtain the required timing. This strategy, called the federated architecture, is becoming unviable in the face of current demands for cost, performance, and quality in embedded systems. To address this problem, integrated architectures have been proposed, consisting of one or a few circuits performing multiple tasks in parallel more efficiently and at lower cost. However, an integrated architecture must provide temporal composability, i.e., the ability to design each task in temporal isolation from the others, so that the individual timing characteristics of each task are preserved. Precision Timed Machines are an integrated-architecture approach that uses multithreaded processors to ensure temporal composability. This work presents the implementation of a Precision Timed Machine named Hivek-RT. This processor, a VLIW supporting Simultaneous Multithreading, executes real-time tasks efficiently when compared with a traditional processor. Besides the efficient implementation, the proposed architecture simplifies the implementation of real-time tasks from the programming point of view.
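Temporal composability in a Precision Timed Machine comes from interleaving hardware threads in fixed slots, so one thread's timing cannot be disturbed by another thread's workload. A toy sketch of that idea (an illustration of fixed round-robin interleaving, not the Hivek-RT pipeline itself):

```python
def schedule(threads, cycles):
    """Fixed round-robin interleaving: thread i issues only in cycles where
    cycle % len(threads) == i, regardless of what other threads compute."""
    trace = []
    for cycle in range(cycles):
        slot = cycle % len(threads)
        trace.append((cycle, threads[slot]))
    return trace

trace = schedule(["T0", "T1", "T2", "T3"], 8)
# T0's issue cycles depend only on its slot position, never on T1-T3's load,
# so each task can be timed in isolation.
t0_cycles = [c for c, t in trace if t == "T0"]
```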

Abstract:

This work consists essentially in the development of an Artificial Neural Network (ANN) to model the behavior of composite materials under fatigue loading. The proposal is a mixed model that associates an analytical equation (the Adam equation) with the structure of the ANN. Since composites often show similar behavior when subjected to fluctuating loads, this equation establishes a pre-defined reference pattern for a generic material, so that the ANN only has to fit the behavior of another composite material to that pattern. In this way the ANN does not need to learn the full behavior of a given material, because the Adam equation does most of the work. The model was used in two different network architectures, modular and perceptron, to analyze its efficiency in distinct structures. Beyond the different architectures, the answers generated from two different data sets, with three and with two S-N curves, were also analyzed. The model was further compared with results from the specialized literature that use a conventional ANN structure. The evaluation considers characteristics such as generalization capacity, robustness, and the Goodman diagrams produced by the networks.
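The mixed model can be read as a residual fit: the analytical equation supplies a generic S-N baseline and the network only learns a material-specific correction on top of it. A minimal sketch, in which the baseline form, the toy data, and the use of a linear least-squares fit in place of the ANN are all illustrative assumptions (not Adam's actual equation or the thesis data):

```python
import numpy as np

def baseline(cycles_log):
    """Illustrative stand-in for the analytical (Adam-type) S-N baseline:
    a generic linear S-N trend in log10(N)."""
    return 1.0 - 0.08 * cycles_log

# Toy S-N data for a specific composite (log10 of cycles, normalized stress)
log_n = np.array([3.0, 4.0, 5.0, 6.0])
stress = np.array([0.78, 0.69, 0.62, 0.54])

# The "network" is reduced here to a linear least-squares correction:
# it only has to learn the residual, not the whole S-N behavior.
residual = stress - baseline(log_n)
coeffs = np.polyfit(log_n, residual, 1)
predict = lambda x: baseline(x) + np.polyval(coeffs, x)
```

Because the residual is small and smooth, the corrector needs far less training data than a network asked to learn the whole curve, which is the point of the mixed model.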

Abstract:

Diesel fuel is one of the leading petroleum products marketed in Brazil, and its quality is monitored by specialized laboratories linked to the Brazilian National Agency of Petroleum, Natural Gas and Biofuels (ANP). The main tests for evaluating the physicochemical properties of diesel are listed in ANP Resolutions No. 65 of December 9th, 2011 and No. 45 of December 20th, 2012, which determine the specification limits for each parameter and the analysis methodologies to be adopted. The standard methods, although well consolidated, require dedicated equipment with high acquisition and maintenance costs, as well as technical expertise. The development of faster, lower-cost alternative methods has been the focus of many researchers. In this perspective, this work assessed the applicability of mathematical equations from the specialized literature and of artificial neural networks (ANNs) for determining specification parameters of diesel fuel. The study used 162 diesel samples with maximum sulfur contents of 50, 500, and 1800 ppm, analyzed in a specialized laboratory using the ASTM methods recommended by the ANP, for a total of 810 tests. Experimental results of atmospheric distillation (ASTM D86) and density (ASTM D4052) of the samples were used as the basic input variables of the equations evaluated. The ANNs were applied to predict the flash point, cetane number, and sulfur content (S50, S500, S1800); feed-forward backpropagation and generalized regression network architectures were tested, varying the parameters of the input matrix in order to determine the best set of variables and the best network type for predicting the variables of interest.
The results obtained with the equations and ANNs were compared with the experimental results using the non-parametric Wilcoxon test and Student's t test at a 5% significance level, as well as the coefficient of determination and the percentage error. An error of 27.61% was obtained for the flash point using a specific equation. The cetane number was obtained by three equations, all showing good correlation coefficients, especially the equation based on the aniline point, with the lowest error, 0.816%. The ANNs for predicting the flash point and the cetane number gave results far superior to those observed with the mathematical equations, with errors of 2.55% and 0.23%, respectively. Among the samples with different sulfur contents, the ANNs best predicted S1800, with an error of 1.557%. In general, feed-forward networks proved superior to generalized regression networks.

Abstract:

Current and future applications pose requirements that the Internet architecture is not able to satisfy, such as mobility, multicast, multihoming, and bandwidth guarantees. Because of these limitations, new architectures have been proposed that take the application requirements into account when a communication is established. ETArch (Entity Title Architecture) is a clean-slate Internet architecture able to use each communication's requirements and flexible enough to work across several layers. Routing plays an important role in the Internet, because it decides the best way to forward primitives through the network, and in a Future Internet all requirements depend on it: a better route may, for instance, take mobility aspects or energy consumption into account. At the dawn of ETArch, routing had not yet been defined. This work provides intra- and inter-domain routing algorithms for ETArch. The route is defined completely before data start to flow, to ensure that the requirements are met. In the Internet, routing has two distinct functions: (i) running specific algorithms to define the best route; and (ii) forwarding data primitives onto the correct link. In the traditional Internet architecture, both functions are performed in every router each time a packet arrives. In this work the complete route is defined before the communication starts, as in telecommunication systems. Routing for ETArch was specified, and experiments were performed to demonstrate the viability of control-plane routing: the initial setup before a communication takes longer, but afterwards only the forwarding of primitives is performed, saving processing time.
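Defining the complete route before any primitive flows amounts to a shortest-path computation at session setup; afterwards, routers only forward along the stored path. A sketch with an illustrative topology (the graph and weights are invented for the example; in ETArch the edge cost could equally encode energy or mobility metrics):

```python
import heapq

def setup_route(graph, src, dst):
    """Compute the full path once, at communication setup (Dijkstra).
    Per-packet work is then reduced to forwarding along the stored path."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Illustrative topology: adjacency list of (neighbor, cost) pairs.
net = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
       "C": [("D", 1)], "D": []}
route = setup_route(net, "A", "D")
```

This mirrors the trade-off stated in the abstract: the setup pays the full path computation once, and the per-primitive cost afterwards is a table lookup rather than a routing decision at every hop.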

Abstract:

Image super-resolution is a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms, based on mathematical theories, for single-image super-resolution problems. To estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered using the Euclidean distance as a dissimilarity metric), or learning dictionaries (often learned using PCA or K-SVD); state-of-the-art methods therefore still suffer from shortcomings. In this work we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS, a structure-tensor-based regularization term, to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data; AGNN and GOC outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size, outperforming both PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR, which shows better visual and quantitative results than state-of-the-art methods.

Abstract:

With increasing fungal resistance to existing drugs on the market and the side effects reported for some compounds with antioxidant properties and enzymatic inhibitors, in particular against α-amylase and α-glucosidase, the discovery of new compounds with biological potential becomes a necessity. In this context, natural products can be an important source of new active molecular architectures. This study therefore evaluated the antioxidant activity, the inhibitory activity against the enzymes α-amylase and α-glucosidase, and the antifungal and cytotoxic activities of the ethanolic extract (EE) of the leaves of Banisteriopsis argyrophylla (Malpighiaceae) and of its fractions, obtained by liquid-liquid extraction using solvents of increasing polarity. The antioxidant activity was evaluated by the DPPH (2,2-diphenyl-1-picrylhydrazyl) free-radical scavenging method, and the ethyl acetate (FAE) and n-butanol (FB) fractions were the most active, as confirmed by the peak current and oxidation potential obtained by differential pulse voltammetry (DPV). The inhibitory activity against α-amylase and α-glucosidase was analyzed with the substrates α-(2-chloro-4-nitrophenyl)-β-1,4-galactopyranosylmaltoside (Gal-α-G2-CNP) and 4-nitrophenyl-α-D-glucopyranoside (p-NPG), respectively. The EE showed considerable activity against α-amylase (EC50 = 2.89 ± 0.1 μg mL–1) compared with acarbose, used as positive control (EC50 = 0.08 ± 0.1 μg mL–1), but no promising activity against α-glucosidase. The inhibitory activity of the fractions against α-amylase was then evaluated, with FAE (EC50 = 2.33 ± 0.1 μg mL–1) and FB (EC50 = 2.57 ± 0.1 μg mL–1) showing the best inhibition. The antifungal activity was evaluated against Candida species, and FAE had the best antifungal potential (MICs between 93.75 and 11.72 μg mL–1) compared with amphotericin as the positive standard (MIC = 1.00 and 2.00 μg mL–1 for C. parapsilosis and C. krusei, used as controls, respectively). The EE (CC50 = 360.00 ± 12 μg mL–1) and the fractions (CC50 > 270.00 μg mL–1) were considerably less toxic to Vero cells than cisplatin, used as positive control (CC50 = 7.01 ± 0.6 μg mL–1). Since FAE showed the best results in the activities studied, this fraction was submitted to ultra-performance liquid chromatography coupled with mass spectrometry (UPLC-MS), and the following flavonoids were identified: (±)-catechin, quercetin-3-O-β-D-Glc/quercetin-3-O-β-D-Gal, quercetin-3-O-β-L-Ara, quercetin-3-O-β-D-Xyl, quercetin-3-O-α-L-Rha, kaempferol-3-O-α-L-Rha, quercetin-3-O-(2''-galloyl)-α-L-Rha, quercetin-3-O-(3''-galloyl)-α-L-Rha, and kaempferol-3-O-(3''-galloyl)-α-L-Rha. FAE was submitted to column chromatography on a C18 phase; (±)-catechin was isolated (FAE-A1, 73 mg) and three fractions consisting of mixtures of flavonoids were obtained (FAE-A2, FAE-A3, and FAE-A4). These compounds were identified by thin-layer chromatography (TLC) and (–)-ESI-MS. The (±)-catechin fraction showed an MIC of 2.83 μg mL–1 against C. glabrata, with amphotericin as positive control; the fractions FAE-A2, FAE-A3, and FAE-A4 showed less antifungal potential at the concentrations tested. The identified flavonoids are described in the literature for their antioxidant capacity, and (±)-catechin, quercetin-3-O-Rha, and kaempferol-3-O-Rha are described as α-amylase inhibitors. Thus, B. argyrophylla is an important species that produces compounds with antioxidant potential, which may be related to its traditional use as an anti-inflammatory, as well as antifungal compounds and α-amylase inhibitors; its leaves are therefore a promising resource for the production of new drugs.

Abstract:

Recent years have seen the emergence of new smart devices and home automation systems. In order to connect and control electronic equipment inside a home, there is a need for integrated remote-control solutions. Smartphones offer all the characteristics, computational power, and portability ideal for controlling and monitoring this type of device. When developing a new product, and depending on the problem, choosing the most suitable approach and technologies is not always easy. This project presents a set of technologies and control solutions based on mobile architectures. As a proof of concept, based on a real system (IVIT), an innovative solution using a smartphone is proposed. IVIT develops a technology to be applied to a variable-inertia water-heating tank, with monitoring, control, and collaborative operation of the heating system. The results obtained showed that the application's performance exceeded expectations and that the proposed solution is a viable alternative for controlling home automation devices with smartphones.

Abstract:

We measured the oxygen isotopic composition of planktonic and benthic foraminifera in three cores collected at key positions to reconstruct the paleoceanography of the Barents Sea: core ASV 880 on the path of the northern branch of Atlantic water inflowing from the Arctic Ocean, core ASV 1200 in the central basin near the polar front, and core ASV 1157 in the main area of brine formation. Modern seawater δ18O measurements show that, far from the coast, δ18O variations are linearly linked to the salinity changes associated with sea-ice melting. The foraminifer δ18O records are dated by 14C measurements performed on mollusk shells, and they provide a detailed reconstruction of the paleoceanographic evolution of the Barents Sea during the Holocene. Four main steps were recognized: the terminal phase of the deglaciation, with melting of the main glaciers located on the surrounding continent and islands; the short thermal optimum from 7.8 ka B.P. to 6.8 ka B.P.; a cold mid-Holocene phase with a large reduction of the inflow of Atlantic water; and the inception of the modern hydrological pattern by 4.7 ka B.P. Brine water formation was active during the whole Holocene. The paleoclimatic evolution of the Barents Sea was driven by both high-latitude summer insolation and the intensity of the Atlantic water inflow.
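The stated linear link between seawater δ18O and salinity (a sea-ice melting mixing line) can be illustrated with an ordinary least-squares fit; the sample values below are synthetic stand-ins, not the measured Barents Sea data:

```python
import numpy as np

# Synthetic seawater samples: salinity (psu) vs delta-18O (per mil).
# Fresher water (more sea-ice melt) is more depleted in 18O.
salinity = np.array([33.0, 33.8, 34.3, 34.8, 35.1])
d18o = np.array([-1.2, -0.6, -0.3, 0.1, 0.3])

# Linear mixing line: d18O = a * S + b
a, b = np.polyfit(salinity, d18o, 1)
predicted = a * 34.0 + b  # interpolate d18O at salinity 34.0 psu
```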

Abstract:

High-resolution (i.e. 15-140 yr) climate records from sediment cores 23071, 23074, and PS2644 from the Nordic Seas were used to reconstruct changes in the surface and deep water circulation during marine isotope stages 1-5.1, i.e. the last 82,000 yr. From this, the causal links between the paleoceanographic signals and the Dansgaard-Oeschger events 1-21 revealed in δ18O ice-core records from Greenland were determined. The stratigraphy of the cores is based on the planktic δ18O curves, the minima of which were directly correlated with the GISP2 δ18O record, numerous AMS 14C ages, and some ash layers. The planktic δ18O and δ13C curves of all three cores reveal numerous meltwater events, the most pronounced of which were assigned to Heinrich events 1-6. The meltwater events, also accompanied by cold sea-surface temperatures and high IRD concentrations, correlate with the stadial phases of the Dansgaard-Oeschger cycles and, in the western Iceland Sea, also with colder periods or abrupt drops in δ18O within a few longer interstadials. Besides being more numerous, the meltwater events also show lighter isotope values in the Iceland Sea than in the central Norwegian Sea, especially compared with core 23071. This implies a continuous inflow of relatively warm Atlantic water into the Norwegian Sea and a cyclonic circulation regime.

Abstract:

The last decades have been characterized by the continuous adoption of IT solutions in the healthcare sector, which has resulted in the proliferation of tremendous amounts of data across heterogeneous systems. Distinct data types are currently generated, manipulated, and stored in the several institutions where patients are treated. Data sharing and integrated access to this information will allow extracting relevant knowledge that can lead to better diagnostics and treatments. This thesis proposes new integration models for gathering information and extracting knowledge from multiple, heterogeneous biomedical sources. The complexity of the scenario led us to split the integration problem according to data type and usage specificity. The first contribution is a cloud-based architecture for exchanging medical imaging services. It offers a simplified registration mechanism for providers and services, promotes remote data access, and facilitates the integration of distributed data sources. Moreover, it is compliant with international standards, ensuring the platform's interoperability with current medical imaging devices. The second proposal is a sensor-based architecture for the integration of electronic health records. It follows a federated integration model and aims to provide a scalable solution for searching and retrieving data from multiple information systems. The last contribution is an open architecture for gathering patient-level data from disperse, heterogeneous databases. All the proposed solutions were deployed and validated in real-world use cases.

Abstract:

Recent paradigms in wireless communication architectures describe environments where nodes exhibit highly dynamic behavior (e.g., User Centric Networks). In such environments, routing is still performed based on the regular store-and-forward behavior of packet switching. Albeit sufficient to compute at least an adequate path between a source and a destination, such routing behavior cannot adequately sustain the highly nomadic lifestyle that Internet users experience today. This thesis analyses the impact of node mobility on routing scenarios and develops forwarding concepts that help message forwarding across graphs whose nodes exhibit human mobility patterns, as is the case in most user-centric wireless networks today. The first part of the work analysed the impact of mobility on routing; we found that node mobility can affect routing performance, depending on link length, distance, and the mobility patterns of the nodes. The study of current mobility parameters showed that they capture mobility only partially, and that a routing protocol's robustness to node mobility depends on the routing metric's sensitivity to node mobility. Mobility-aware routing metrics were therefore devised to increase routing robustness to node mobility, in two categories: time-based and spatial-correlation-based. For the validation of the metrics, several mobility models were used, including models that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool using two widely used multi-hop routing protocols, Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared with the benchmark metric, meaning that more stable nodes were used to route data.
The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted a variation in the performance of the metrics, including the benchmark metric, under different mobility models, due to differences in the rules governing node mobility in each model.
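A time-based, mobility-aware metric of the kind described can be sketched as a cost that penalizes links with low observed availability, steering traffic through more stable nodes (the metric form and values below are a generic illustration, not the exact metrics of the thesis):

```python
def link_cost(up_time, total_time, base_cost=1.0):
    """Penalize links available for only a small fraction of the observation
    window: unstable links look expensive to the routing layer."""
    availability = up_time / total_time
    return base_cost / max(availability, 1e-6)

def best_next_hop(candidates):
    """Pick the neighbor whose link has the lowest mobility-aware cost."""
    return min(candidates, key=lambda c: link_cost(c["up"], c["window"]))

neighbors = [
    {"id": "n1", "up": 30.0, "window": 100.0},  # often away: high cost
    {"id": "n2", "up": 90.0, "window": 100.0},  # stable: low cost
]
choice = best_next_hop(neighbors)["id"]
```

Because stable links score lower costs, paths built from them break less often, which is the mechanism behind the reduced path re-computation frequency reported above.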

Abstract:

Hybridisation is a systematic process along which the characteristic features of hybrid logic, both at the syntactic and the semantic levels, are developed on top of an arbitrary logic framed as an institution. In a series of papers this process has been detailed and taken as a basis for a specification methodology for reconfigurable systems. The present paper extends this work by showing how a proof calculus (in both a Hilbert and a tableau based format) for the hybridised version of a logic can be systematically generated from a proof calculus for the latter. Such developments provide the basis for a complete proof theory for hybrid(ised) logics, and thus pave the way to the development of (dedicated) proof support.

Abstract:

The constant evolution of technology has made available computational tools that were mere expectations ten years ago. The increase in computational power applied to numerical models that simulate the atmosphere has broadened the study of atmospheric phenomena through high-performance computing tools. This work proposed the development of algorithms based on SIMT architectures and the application of parallelization techniques, using the OpenACC toolset, to process numerical weather prediction data from the Weather Research and Forecast model. The proposal has a strong interdisciplinary character, seeking interaction between the areas of atmospheric modeling and scientific computing. The influence of the cloud-microphysics computation on the model's run-time degradation was tested. Because the input data for GPU execution was not large enough, the time needed to transfer data from the CPU to the GPU was greater than the time to run the computation on the CPU. Another determining factor was the addition of CUDA code within an MPI context, causing resource contention among processors and again degrading the execution time. The proposal of using directives to apply high-performance computing in a CUDA structure seems very promising, but it still needs to be used with great caution in order to produce good results. A hybrid MPI + CUDA construction was tested, but the results were not conclusive.