877 results for High-performance computing hyperspectral imaging


Relevance:

100.00%

Publisher:

Abstract:

Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation details and to the architectural implications of the execution model's behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.

Relevance:

100.00%

Publisher:

Abstract:

In recent years much research has been invested in the parallel processing of numerical applications. However, parallel processing of symbolic and AI applications has received less attention. This paper presents a system for parallel symbolic computing, named ACE, based on the logic programming paradigm. ACE is a computational model for the full Prolog language, capable of exploiting or-parallelism and independent and-parallelism. In this paper we focus on the implementation of the and-parallel part of the ACE system (called &ACE) on a shared-memory multiprocessor, describing its organization and some optimizations, and presenting some performance figures demonstrating the ability of &ACE to exploit parallelism efficiently.
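
As a rough illustration of the independent and-parallelism that &ACE exploits, the sketch below runs two subgoals of a conjunction that share no unbound variables on separate workers and joins their solutions afterwards. This is a minimal sketch only, not the actual ACE machinery; `solve_p` and `solve_q` are hypothetical stand-ins for compiled predicates.

```python
# Minimal sketch of independent and-parallelism -- NOT the &ACE
# implementation: two subgoals with no shared unbound variables run on
# separate workers, and their solution sets are joined afterwards.
from concurrent.futures import ThreadPoolExecutor

def solve_p(x):          # hypothetical compiled predicate p(X, A)
    return [x * 2]       # list of solutions for A

def solve_q(y):          # hypothetical compiled predicate q(Y, B)
    return [y + 1]       # list of solutions for B

def and_parallel(goal_a, goal_b):
    """Evaluate two independent subgoals concurrently and return the
    cross-product of their bindings, as a sequential engine would."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        fa, fb = pool.submit(goal_a), pool.submit(goal_b)
        return [(a, b) for a in fa.result() for b in fb.result()]

# ?- p(3, A), q(5, B).   % the two subgoals are independent
print(and_parallel(lambda: solve_p(3), lambda: solve_q(5)))  # [(6, 6)]
```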

Relevance:

100.00%

Publisher:

Abstract:

Motivated by the need to precisely simulate, with an explicit non-linear finite element code, the impact phenomena that may occur inside a jet engine turbine, four new material models are postulated. Each one is calibrated for one of four high-performance alloys that can be encountered in a modern jet engine.

A new uncoupled material model for high strain and ballistic applications is proposed (JCX). Based on a Johnson-Cook type model, the proposed formulation introduces the effect of the third deviatoric invariant by means of three different Lode angle dependent functions. The Lode-dependent functions are added to both the plasticity and the failure models. The postulated model is calibrated for a 6061-T651 aluminium alloy with data taken from the literature. The fracture pattern predictability of the JCX material model is shown by performing numerical simulations of various quasi-static and dynamic tests.

As an extension of the above-mentioned model, a modification of the thermal softening behaviour accounting for phase transformation temperatures is developed (JCXt). Additionally, a Lode angle dependent flow stress is defined. By analysing the phase diagram and the high temperature tests performed, the phase transformation temperatures of the FV535 stainless steel are determined, and the postulated material model constants for this steel are calibrated.

A coupled elastoplastic-damage material model for high strain and ballistic applications is presented (JCXd). A Lode angle dependent function is added to the equivalent plastic strain to failure definition of the Johnson-Cook failure criterion. The weakening in the elastic law and in the Johnson-Cook type constitutive relation implicitly introduces the Lode angle dependency in the elastoplastic behaviour. The material model is calibrated for precipitation-hardened Inconel 718 nickel-base superalloy. The combination of a Lode angle dependent failure criterion with weakened constitutive equations is shown to predict the fracture patterns of the mechanical tests performed and to provide reliable results.

Finally, a transversely isotropic material model for directionally solidified alloys is presented. The proposed yield function is based on a single linear transformation of the stress tensor; the linear operator weighs the degree of anisotropy of the yield function. The elastic behaviour, as well as the hardening, are considered isotropic, and a Johnson-Cook type relation is adopted to model the hardening. Failure is modelled with the Cockcroft-Latham criterion. A material vector included in the model implementation allows the reference orientation to be aligned with any direction the user may need. The model is calibrated for the MAR-M 247 directionally solidified nickel-base superalloy.
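
For context, the baseline Johnson-Cook flow stress that this family of models extends is standard, and the sketch below evaluates it with a generic multiplicative Lode-angle factor. The `lode_factor` placeholder and the constants shown are assumptions for illustration; the abstract does not give the exact Lode-dependent functions or the calibrated constants.

```python
# Sketch of a Johnson-Cook type flow stress with a generic Lode-angle
# factor. The baseline JC form is standard; lode_factor is only a
# placeholder for the three Lode-dependent functions mentioned above,
# whose exact forms are not given in the abstract.
import math

def johnson_cook_stress(eps_p, eps_dot, T, A, B, n, C, m,
                        eps_dot0=1.0, T_room=293.0, T_melt=925.0,
                        lode_factor=1.0):
    strain_term = A + B * eps_p ** n                       # strain hardening
    rate_term = 1.0 + C * math.log(max(eps_dot / eps_dot0, 1e-12))
    T_star = (T - T_room) / (T_melt - T_room)              # homologous temp.
    thermal_term = 1.0 - max(T_star, 0.0) ** m             # thermal softening
    return strain_term * rate_term * thermal_term * lode_factor

# Illustrative constants loosely in the range reported in the literature
# for 6061-T651 aluminium (NOT the thesis' calibrated values):
print(johnson_cook_stress(eps_p=0.1, eps_dot=1e3, T=400.0,
                          A=324e6, B=114e6, n=0.42, C=0.002, m=1.34))
```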

Relevance:

100.00%

Publisher:

Abstract:

Major ampullate (MA) dragline silk supports spider orb webs, combining strength and extensibility in the toughest known biomaterial. MA silk evolved ~376 MYA, and identifying how evolutionary changes in proteins influenced silk mechanics is crucial for biomimetics, but is hindered by high spinning plasticity. We use supercontraction to remove that variation and characterize MA silk across the spider phylogeny. We show that mechanical performance is conserved within, but divergent among, major lineages, evolving in correlation with discrete changes in proteins. Early MA silk tensile strength improved rapidly with the origin of GGX amino acid motifs and increased repetitiveness. Tensile strength then maximized in basal entelegyne spiders, ~230 MYA. Toughness subsequently improved through increased extensibility within orb spiders, coupled with the origin of a novel protein (MaSp2). Key changes in MA silk proteins therefore correlate with the sequential evolution of high-performance orb spider silk and could aid the design of biomimetic fibers.

Relevance:

100.00%

Publisher:

Abstract:

Mersenne Twister (MT) uniform random number generators are key cores for the hardware acceleration of Monte Carlo simulations. In this work, two different architectures are studied: besides the classical table-based architecture, a new architecture based on a circular buffer and especially targeting FPGAs is proposed. A 30% performance improvement has been obtained compared to the fastest previous work. The applicability of the proposed MT architectures has been demonstrated in a high-performance Gaussian RNG.
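
A software analogue of the circular-buffer idea can be sketched with the standard MT19937 recurrence, twisting one state word per output and writing it back in place behind a wrapping index, instead of refilling the whole 624-word table at once. This is a sketch of the concept, not the paper's FPGA design.

```python
# Sketch of MT19937 with a circular-buffer state update: one word is
# twisted and written back in place per call, with a wrapping index.
# Standard recurrence and tempering; NOT the paper's RTL architecture.
N, M = 624, 397
MATRIX_A, UPPER, LOWER = 0x9908B0DF, 0x80000000, 0x7FFFFFFF

class CircularMT:
    def __init__(self, seed=5489):
        self.s = [0] * N
        self.s[0] = seed & 0xFFFFFFFF
        for i in range(1, N):                      # standard initializer
            prev = self.s[i - 1]
            self.s[i] = (1812433253 * (prev ^ (prev >> 30)) + i) & 0xFFFFFFFF
        self.i = 0

    def next_u32(self):
        i = self.i
        y = (self.s[i] & UPPER) | (self.s[(i + 1) % N] & LOWER)
        y = self.s[(i + M) % N] ^ (y >> 1) ^ (MATRIX_A if y & 1 else 0)
        self.s[i] = y                              # write back in place
        self.i = (i + 1) % N                       # circular index
        y ^= y >> 11                               # tempering
        y ^= (y << 7) & 0x9D2C5680
        y ^= (y << 15) & 0xEFC60000
        return (y ^ (y >> 18)) & 0xFFFFFFFF

rng = CircularMT(seed=1)
print([rng.next_u32() for _ in range(3)])
```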

Relevance:

100.00%

Publisher:

Abstract:

The present research focuses on the application of hyperspectral images to the supervision of quality deterioration in ready-to-use leafy spinach (Spinacia oleracea) during storage. Two sets of samples of packed leafy spinach were considered: (a) a first set of samples was stored at 20 °C (E-20) in order to accelerate the degradation process; these samples were measured on the day of reception in the laboratory and after 2 days of storage; (b) a second set of samples was kept at 10 °C (E-10), and the measurements were taken throughout storage, beginning on the day of reception and repeating the image acquisition 3, 6 and 9 days later. Twenty leaves per test were analyzed. Hyperspectral images were acquired with a push-broom CCD camera equipped with a VNIR spectrograph (400–1000 nm). A calibration set of spectra was extracted from the E-20 samples, containing three classes of degradation: class A (optimal quality), class B, and class C (maximum deterioration). Reference average spectra were defined for each class. Three models computed on the calibration set, with a decreasing degree of complexity, were compared according to their ability to segregate leaves at different quality stages (fresh, with incipient and non-visible symptoms of degradation, and degraded): spectral angle mapper distance (SAM), partial least squares discriminant analysis (PLS-DA), and a non-linear index (Leafy Vegetable Evolution, LEVE) combining five wavelengths among those previously selected by the CovSel procedure. In sets E-10 and E-20, artificial images of the membership degree, based on the distance of each pixel to the reference classes, were computed by assigning each pixel to the closest reference class. The three methods were able to show the degradation of the leaves with storage time.
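
Of the three models, the spectral angle mapper is simple enough to sketch directly: each pixel spectrum is assigned to the reference class at the smallest spectral angle. The three-band reference spectra below are made-up stand-ins for the class A/B/C averages; real VNIR spectra would have hundreds of bands.

```python
# Minimal spectral angle mapper (SAM) classifier: assign each pixel
# spectrum to the reference class with the smallest spectral angle.
import numpy as np

def sam_angle(x, r):
    """Spectral angle (radians) between a pixel spectrum and a reference."""
    cos = np.dot(x, r) / (np.linalg.norm(x) * np.linalg.norm(r))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def classify(pixels, references):
    """pixels: (n_pixels, n_bands); references: dict name -> (n_bands,)."""
    names = list(references)
    angles = np.stack([[sam_angle(p, references[n]) for n in names]
                       for p in pixels])
    return [names[k] for k in angles.argmin(axis=1)]

refs = {"A": np.array([0.9, 0.7, 0.2]),   # hypothetical class averages
        "B": np.array([0.6, 0.6, 0.4]),
        "C": np.array([0.3, 0.5, 0.6])}
print(classify(np.array([[0.85, 0.72, 0.25], [0.32, 0.48, 0.61]]), refs))
```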

Relevance:

100.00%

Publisher:

Abstract:

The dome-shaped Fresnel-Köhler concentrator is a novel optical design for photovoltaic applications. It builds on two previous successful CPV optical designs, the FK concentrator with a flat Fresnel lens and the dome-shaped Fresnel lens system developed by Daido Steel, resulting in a superior concentrator. This optical concentrator achieves large concentration factors, high tolerance (i.e. acceptance angle) and high optical efficiency, three key issues when dealing with photovoltaic applications. In addition, the irradiance is distributed very evenly on the cell surface. The concentrator has shown outstanding simulation results, achieving an effective concentration-acceptance product (CAP) value of 0.72, on-axis optical efficiency over 85%, and good irradiance uniformity on the cell provided by Köhler integration. Furthermore, due to its high tolerance, we present the dome-shaped Fresnel-Köhler concentrator as a cost-effective CPV optical design. All this makes this concentrator superior to conventional competitors in the current market.
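
To put the CAP figure in perspective, the standard definition CAP = sqrt(Cg) * sin(alpha) links geometrical concentration Cg and acceptance angle alpha. The quick check below assumes an example concentration of 1000x, which is not a value taken from the abstract.

```python
# Quick check of the concentration-acceptance product quoted above,
# using the standard definition CAP = sqrt(Cg) * sin(alpha).
# The 1000x concentration is an assumed example, not a paper value.
import math

CAP = 0.72
Cg = 1000.0                                # assumed geometrical concentration
alpha = math.degrees(math.asin(CAP / math.sqrt(Cg)))
print(f"acceptance angle at {Cg:.0f}x: +/-{alpha:.2f} deg")  # ~ +/-1.30 deg
```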

Relevance:

100.00%

Publisher:

Abstract:

- PV and HCPV compete in the utility market.
- PV cost reduction has been dramatic through volume.
- A complete off-the-shelf optics solution by Evonik and LPI.
- Based on the best-in-class design: the FK concentrator.

Relevance:

100.00%

Publisher:

Abstract:

In this work, the power management techniques implemented in a high-performance node for Wireless Sensor Networks (WSN), based on a RAM-based FPGA, are presented. This new custom node architecture is intended for high-end WSN applications that include complex sensor management (like video cameras), high compute-demanding tasks (such as image encoding or robust encryption), and/or higher data bandwidth needs. For these complex processing tasks, while still meeting low-power design requirements, the combination of different techniques, such as extensive HW algorithm mapping, smart management of power islands to selectively switch components on and off, smart and low-energy partial reconfiguration, and an adequate set of energy-saving modes and wake-up options, can yield energy results that compete with and improve on the energy usage of the typical low-power microcontrollers used in many WSN node architectures. Indeed, results show that higher-complexity tasks favor HW-based platforms, while the flexibility achieved by dynamic and partial reconfiguration techniques is comparable to that of SW-based solutions.
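
A back-of-the-envelope calculation of the kind implied here: since energy is power times time, a faster but hungrier HW-mapped task can consume less energy per task than a slow low-power microcontroller. All numbers below are hypothetical, chosen only to illustrate the tradeoff, not measurements from the paper.

```python
# Energy = power * time decides which platform wins per task.
# All figures are hypothetical, for illustration only.
def task_energy(power_mw, time_ms):
    return power_mw * time_ms / 1000.0       # energy in millijoules

mcu_encode = task_energy(power_mw=30.0,  time_ms=900.0)   # slow SW encode
fpga_encode = task_energy(power_mw=120.0, time_ms=40.0)   # fast HW encode
print(f"MCU: {mcu_encode:.1f} mJ, FPGA: {fpga_encode:.1f} mJ")
# With power islands, the FPGA can gate idle logic after 40 ms, so the
# faster-but-hungrier platform ends up cheaper per task.
```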

Relevance:

100.00%

Publisher:

Abstract:

Quantum Key Distribution (QKD) is carving its place among the tools used to secure communications. While a difficult technology, it enjoys benefits that set it apart from the rest, the most prominent being its provable security based on the laws of physics. QKD requires not only the mastering of signals at the quantum level, but also classical processing to extract a secret key from them. This postprocessing has customarily been studied in terms of efficiency, a figure of merit that offers a biased view of the performance of real devices. Here we argue that throughput is the significant magnitude in practical QKD, especially in the case of high-speed devices, where the differences are more marked, and give some examples contrasting the usual postprocessing schemes with new ones from modern coding theory. A good understanding of its implications is very important for the design of modern QKD devices.
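
A sketch of why throughput matters beyond efficiency: the secret-key rate a device delivers is, roughly, the secret fraction times the rate at which the postprocessing engine digests sifted key. The secret-fraction formula is the standard asymptotic BB84 estimate; the processing rates below are assumed examples, not measured figures from the paper.

```python
# Secret-key throughput ~ secret fraction * postprocessing rate.
# Standard asymptotic BB84 estimate; the rates are assumed examples.
import math

def h2(p):
    """Binary entropy."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

def secret_fraction(qber, f_ec):
    """1 - f*h(e) (reconciliation leak) - h(e) (privacy amplification)."""
    return max(0.0, 1.0 - f_ec * h2(qber) - h2(qber))

qber = 0.02
for name, f_ec, proc_rate in [("Cascade", 1.05, 5e6),     # highly interactive
                              ("LDPC",    1.10, 100e6)]:  # one-message
    mbps = secret_fraction(qber, f_ec) * proc_rate / 1e6
    print(f"{name}: {mbps:.1f} Mb/s secret key")
```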

Relevance:

100.00%

Publisher:

Abstract:

- NIR hyperspectral images (1000-2200 nm) allowed the detection of peanut traces down to adulteration levels of 0.01%.
- A determination coefficient of R2 = 0.946 was found for the quantification of peanut adulteration from 10% to 0.1%.
- The obtained results show the feasibility of using HSI systems for the detection of peanut traces in conjunction with chemical procedures, such as RT-PCR and ELISA, to facilitate quality control surveillance on food product processing lines.

Relevance:

100.00%

Publisher:

Abstract:

As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance: even though the resources available per chip keep increasing, the operating frequency has stalled. Besides, as the level of integration increases, it is difficult to keep defect density under control, so new fault-tolerant techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3), allowing dynamic and context-aware use of resources, is implemented on a high-performance wireless sensor node (HiReCookie) to perform an image processing application.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents some power converter architectures and circuit topologies which can be used to meet the requirements of a high-performance transformer rectifier unit in aircraft applications, namely: high power factor with low THD, high efficiency and high power density. The voltage and power levels demanded by this application are: three-phase line-to-neutral input voltage of 115 or 230 V AC rms (360–800 Hz), output voltage of 28 V DC or 270 V DC (the new grid value), and output power of up to tens of kilowatts.
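
The two headline requirements, high power factor and low THD, are directly linked: for a rectifier front end with unity displacement factor, the standard relation PF = 1/sqrt(1 + THD^2) means distortion alone sets the power factor. A quick numerical check (the THD values are illustrative, not paper figures):

```python
# Relation between input-current THD and power factor, assuming unity
# displacement factor: PF = 1 / sqrt(1 + THD^2). Standard result.
import math

for thd_pct in (5, 10, 30):
    pf = 1.0 / math.sqrt(1.0 + (thd_pct / 100.0) ** 2)
    print(f"THD {thd_pct:3d}% -> PF {pf:.4f}")
# e.g. keeping THD under 5% keeps the power factor above 0.9988.
```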

Relevance:

100.00%

Publisher:

Abstract:

In recent years, Independent Component Analysis (ICA) has proven to be a powerful signal-processing technique for solving Blind Source Separation (BSS) problems in different scientific domains. In the present work, an application of ICA to the processing of NIR hyperspectral images to detect traces of peanut in wheat flour is presented. Processing was performed without a priori knowledge of the chemical composition of the two food materials. The aim was to extract the source signals of the different chemical components from the initial data set and to use them to determine the distribution of peanut traces in the hyperspectral images. To determine the optimal number of independent components to be extracted, the Random ICA by blocks method was used. This method is based on the repeated calculation of several models using an increasing number of independent components, after randomly segmenting the data matrix into two blocks, and then calculating the correlations between the signals extracted from the two blocks. The extracted ICA signals were interpreted and their ability to classify peanut and wheat flour was studied. Finally, all the extracted ICs were used to construct a single synthetic signal that could be used directly with the hyperspectral images to enhance the contrast between the peanut and the wheat flour in a real multi-use industrial environment. Furthermore, feature extraction methods (a connected-components labelling algorithm followed by a flood-fill method to extract object contours) were applied in order to locate the spatial presence of peanut traces. A good visualization of the distribution of peanut traces was thus obtained.
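
A sketch of the Random ICA by blocks procedure as described here, using scikit-learn's FastICA as a stand-in for the paper's ICA algorithm: split the unfolded data matrix into two random blocks, extract an increasing number of components from each, and watch when components from the two blocks stop correlating well.

```python
# Sketch of "Random ICA by blocks": split the data into two random
# blocks, fit ICA models of growing size on each, and report how well
# components extracted from the two blocks still match. FastICA is a
# stand-in for the actual ICA algorithm used in the paper.
import numpy as np
from sklearn.decomposition import FastICA

def random_ica_by_blocks(X, max_ics=10, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    A, B = X[idx[: len(X) // 2]], X[idx[len(X) // 2:]]
    for k in range(2, max_ics + 1):
        Sa = FastICA(n_components=k, random_state=0).fit(A).components_
        Sb = FastICA(n_components=k, random_state=0).fit(B).components_
        # best absolute correlation match of each A-component in block B
        corr = np.abs(np.corrcoef(Sa, Sb)[:k, k:])
        print(f"{k} ICs: worst matched correlation "
              f"{corr.max(axis=1).min():.3f}")

# X would be the (n_pixels, n_bands) unfolded hyperspectral image;
# random data is used here only so the sketch runs stand-alone.
random_ica_by_blocks(np.random.rand(200, 50), max_ics=5)
```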

Relevance:

100.00%

Publisher:

Abstract:

The postprocessing or secret-key distillation process in quantum key distribution (QKD) mainly involves two well-known procedures: information reconciliation and privacy amplification. Information or key reconciliation has customarily been studied in terms of efficiency. During this step, some information needs to be disclosed in order to reconcile discrepancies in the exchanged keys. The leakage of information is lower-bounded by a theoretical limit and is usually parameterized by the reconciliation efficiency (or inefficiency), i.e. the ratio of information disclosed over the Shannon limit. Most techniques for reconciling errors in QKD try to optimize this parameter. For instance, the well-known Cascade protocol (probably the most widely used procedure for reconciling errors in QKD) was recently shown to have an average efficiency of 1.05, at the cost of high interactivity (number of exchanged messages). Modern coding techniques, such as rate-adaptive low-density parity-check (LDPC) codes, were also shown to achieve similar efficiency values while exchanging only one message, or even better values with little interactivity and shorter block-length codes.
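
For concreteness, the efficiency quoted for Cascade is defined relative to the Shannon limit on the information that must be disclosed (a standard definition, not specific to any one protocol):

$$\mathrm{leak}_{EC} \geq n\,h(e), \qquad f_{EC} = \frac{\mathrm{leak}_{EC}}{n\,h(e)} \geq 1, \qquad h(e) = -e\log_2 e - (1-e)\log_2(1-e),$$

so f_EC = 1.05 means 5% more information is disclosed than the theoretical minimum: at a QBER of e = 0.02, h(e) ≈ 0.141, and reconciling n = 10^6 sifted bits discloses about 1.05 × 0.141 × 10^6 ≈ 1.5 × 10^5 bits.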