877 results for High-performance computing hyperspectral imaging
Abstract:
The Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) is the scientific camera system onboard the Rosetta spacecraft (Figure 1). The advanced high-performance imaging system will be pivotal for the success of the Rosetta mission. OSIRIS will detect 67P/Churyumov-Gerasimenko from a distance of more than 10⁶ km, characterise the comet's shape and volume and its rotational state, and find a suitable landing spot for Philae, the Rosetta lander. OSIRIS will observe the nucleus, its activity and surroundings down to a scale of ~2 cm px⁻¹. The observations will begin well before the onset of cometary activity and will extend over months until the comet reaches perihelion. During the rendezvous phase of the Rosetta mission, OSIRIS will provide key information about the nature of cometary nuclei and reveal the physics of cometary activity that leads to the gas and dust coma. OSIRIS comprises a high-resolution Narrow Angle Camera (NAC) unit and a Wide Angle Camera (WAC) unit accompanied by three electronics boxes. The NAC is designed to obtain high-resolution images of the surface of comet 67P/Churyumov-Gerasimenko through 12 discrete filters over the wavelength range 250–1000 nm at an angular resolution of 18.6 μrad px⁻¹. The WAC is optimised to provide images of the near-nucleus environment in 14 discrete filters at an angular resolution of 101 μrad px⁻¹. The two units use identical shutter, filter wheel, front door, and detector systems. They are operated by a common Data Processing Unit. The OSIRIS instrument has a total mass of 35 kg and is provided by institutes from six European countries.
Abstract:
As demonstrated by anatomical and physiological studies, the cerebral cortex consists of groups of cortical modules, each comprising populations of neurons with similar functional properties. This functional modularity exists in both sensory and association neocortices. However, the role of such cortical modules in perceptual and cognitive behavior is unknown. To aid in the examination of this issue we have applied the high spatial resolution optical imaging methodology to the study of awake, behaving animals. In this paper, we report the optical imaging of orientation domains and blob structures, approximately 100–200 μm in size, in visual cortex of the awake and behaving monkey. By overcoming the spatial limitations of other existing imaging methods, optical imaging will permit the study of a wide variety of cortical functions at the columnar level, including motor and cognitive functions traditionally studied with positron-emission tomography or functional MRI techniques.
Abstract:
Considerable evidence exists to support the hypothesis that the hippocampus and related medial temporal lobe structures are crucial for the encoding and storage of information in long-term memory. Few human imaging studies, however, have successfully shown signal intensity changes in these areas during encoding or retrieval. Using functional magnetic resonance imaging (fMRI), we studied normal human subjects while they performed a novel picture encoding task. High-speed echo-planar imaging techniques evaluated fMRI signal changes throughout the brain. During the encoding of novel pictures, statistically significant increases in fMRI signal were observed bilaterally in the posterior hippocampal formation and parahippocampal gyrus and in the lingual and fusiform gyri. To our knowledge, this experiment is the first fMRI study to show robust signal changes in the human hippocampal region. It also provides evidence that the encoding of novel, complex pictures depends upon an interaction between ventral cortical regions, specialized for object vision, and the hippocampal formation and parahippocampal gyrus, specialized for long-term memory.
Abstract:
Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems: e.g. registration, object recognition and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied. Therefore, computing these features in real time for many points in the scene is impossible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D-based computer vision systems where such features are typically used. The use of a GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained using the GPU to accelerate the computation of a 3D descriptor based on the calculation of 3D semi-local surface patches of partial views. This allows descriptor computation at several points of a scene in real time. The benefits of the accelerated descriptor have been demonstrated in object recognition tasks. Source code will be made publicly available as a contribution to the Open Source Point Cloud Library.
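The reason such descriptors map well onto a GPU is that each point's local computation is independent of every other point's. The following sketch illustrates that structure with the simplest case the abstract mentions, surface normals from local covariance, as a plain NumPy CPU stand-in; it is not the paper's GPU implementation, and all names are illustrative.

```python
import numpy as np

def estimate_normals(points, k=8):
    # Brute-force k-nearest neighbours; each row of `idx` is one point's patch.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    idx = np.argsort(d2, axis=1)[:, :k]
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):   # each iteration is independent -> GPU-friendly
        q = points[nb] - points[nb].mean(axis=0)
        # eigenvector of the smallest eigenvalue of the local covariance
        _, vecs = np.linalg.eigh(q.T @ q)
        normals[i] = vecs[:, 0]
    return normals

# flat synthetic cloud: every estimated normal should point along ±z
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, (200, 3))
pts[:, 2] = 0.0
nrm = estimate_normals(pts)
```

On a GPU, the per-point loop becomes one thread (or block) per point, which is what makes real-time computation at many scene points feasible.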
Abstract:
In many classification problems, it is necessary to consider the specific location within an n-dimensional space from which features have been calculated. For example, considering the location of features extracted from specific areas of a two-dimensional space, such as an image, could improve the understanding of a scene for a video surveillance system. In the same way, identical features extracted at different locations could denote different actions for a 3D HCI system. In this paper, we present a self-organizing feature map able to preserve the topology of the locations of the n-dimensional space from which the feature vectors have been extracted. The main contribution is to implicitly preserve the topology of the original space, because considering the locations of the extracted features and their topology can ease the solution of certain problems. Specifically, the paper proposes the n-dimensional constrained self-organizing map preserving the input topology (nD-SOM-PINT). Features from adjacent areas of the n-dimensional space used to extract the feature vectors lie explicitly in adjacent areas of the nD-SOM-PINT, constraining the neural network structure and learning. As a case study, the neural network has been instantiated to represent and classify features, such as trajectories extracted from a sequence of images, at a high level of semantic understanding. Experiments have been thoroughly carried out on the CAVIAR datasets (Corridor, Frontal and Inria), taking into account the global behaviour of an individual, in order to validate the ability to preserve the topology of the two-dimensional space and obtain high-performance trajectory classification, in contrast to approaches that ignore the location of features. Moreover, a brief example has been included to validate the nD-SOM-PINT proposal in a domain other than individual trajectories.
Results confirm the high accuracy of the nD-SOM-PINT, which outperforms previous methods aimed at classifying the same datasets.
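The base mechanism the nD-SOM-PINT builds on can be shown in a few lines. The sketch below is a plain 2-D Kohonen SOM, not the constrained nD-SOM-PINT itself: units that are neighbours on the map are pulled toward nearby inputs, so the map's grid topology comes to mirror the topology of the input space. All parameter values are illustrative.

```python
import numpy as np

def train_som(data, grid=(8, 8), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.uniform(data.min(), data.max(), (h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)                            # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5                # shrinking neighbourhood
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
        theta = np.exp(-d2 / (2.0 * sigma ** 2))           # neighbourhood kernel
        weights += lr * theta[:, None] * (x - weights)
    return weights

# map 500 uniform 2-D samples onto an 8x8 grid
rng = np.random.default_rng(1)
data = rng.uniform(0.0, 1.0, (500, 2))
units = train_som(data, grid=(8, 8))
```

The nD-SOM-PINT additionally constrains which map units may represent which regions of the input space, so that adjacency in the map is tied to adjacency in the space from which the features were extracted.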
Abstract:
In today's internet world, web browsers are an integral part of our day-to-day activities. Therefore, web browser security is a serious concern for all of us. Browsers can be breached in different ways. Because of their over-privileged access, extensions are responsible for many security issues. Browser vendors try to keep safe extensions in their official extension galleries. However, their security control measures are not always effective and adequate. The distribution of unsafe extensions through different social engineering techniques is also a very common practice. Therefore, before installation, users should thoroughly analyze the security of browser extensions. Extensions are not only available for desktop browsers; many mobile browsers, for example, Firefox for Android and UC Browser for Android, are also furnished with extension features. Mobile devices have various resource constraints in terms of computational capabilities, power, network bandwidth, etc. Hence, conventional extension security analysis techniques cannot be efficiently used by end users to examine mobile browser extension security issues. To overcome the inadequacies of the existing approaches, we propose CLOUBEX, a CLOUd-based security analysis framework for both desktop and mobile Browser EXtensions. This framework uses a client-server architecture model. In this framework, compute-intensive security analysis tasks are generally executed on a high-speed computing server hosted in a cloud environment. CLOUBEX is also enriched with a number of essential features, such as client-side analysis, requirements-driven analysis, high performance, and dynamic decision making. At present, the Firefox extension ecosystem is the most susceptible to different security attacks. Hence, the framework is implemented for the security analysis of Firefox desktop and Firefox for Android mobile browser extensions.
A static taint analysis is used to identify malicious information flows in Firefox extensions. In CLOUBEX, there are three analysis modes. A dynamic decision-making algorithm selects the best mode based on some important parameters, such as the processing speed of the client device and the network connection speed. Using the best analysis mode, performance and power consumption are improved significantly. In the future, this framework can be leveraged for the security analysis of other desktop and mobile browser extensions, too.
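The mode-selection idea can be sketched as a small cost model: estimate the end-to-end latency of each analysis mode from the client's compute speed and the network speed, then pick the cheapest. The three mode names, the cost formulas, and every constant below are illustrative assumptions, not CLOUBEX's actual algorithm.

```python
def pick_mode(client_mips, net_mbps, ext_size_mb, work_units,
              server_mips=50_000):
    # client-only: all analysis on the device, nothing uploaded
    t_client = work_units / client_mips
    # server-only: upload the extension, analyse remotely
    t_server = ext_size_mb * 8 / net_mbps + work_units / server_mips
    # hybrid: a light local pre-filter, then ship half the work to the cloud
    t_hybrid = (0.2 * work_units / client_mips
                + 0.5 * ext_size_mb * 8 / net_mbps
                + 0.5 * work_units / server_mips)
    modes = {"client": t_client, "server": t_server, "hybrid": t_hybrid}
    return min(modes, key=modes.get)

# a fast desktop on a poor connection vs. a weak phone on fast Wi-Fi
fast_client = pick_mode(100_000, 0.1, 5, 1_000)   # "client"
weak_client = pick_mode(10, 100, 5, 1_000)        # "server"
```

The same structure extends naturally to power consumption: replace the latency estimates with per-mode energy estimates, or minimize a weighted sum of both.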
Abstract:
Historic records of α-dicarbonyls (glyoxal, methylglyoxal), carboxylic acids (C6–C12 dicarboxylic acids, pinic acid, p-hydroxybenzoic acid, phthalic acid, 4-methylphthalic acid), and ions (oxalate, formate, calcium) were determined with annual resolution in an ice core from Grenzgletscher in the southern Swiss Alps, covering the time period from 1942 to 1993. Chemical analysis of the organic compounds was conducted using ultra-high-performance liquid chromatography (UHPLC) coupled to electrospray ionization high-resolution mass spectrometry (ESI-HRMS) for dicarbonyls and long-chain carboxylic acids and ion chromatography for short-chain carboxylates. Long-term records of the carboxylic acids and dicarbonyls, as well as their source apportionment, are reported for western Europe. This is the first study comprising long-term trends of dicarbonyls and long-chain dicarboxylic acids (C6–C12) in Alpine precipitation. Source assignment of the organic species present in the ice core was performed using principal component analysis. Our results suggest biomass burning, anthropogenic emissions, and transport of mineral dust to be the main parameters influencing the concentration of organic compounds. Ice core records of several highly correlated compounds (e.g., p-hydroxybenzoic acid, pinic acid, pimelic, and suberic acids) can be related to the forest fire history in southern Switzerland. P-hydroxybenzoic acid was found to be the best organic fire tracer in the study area, revealing the highest correlation with the burned area from fires. Historical records of methylglyoxal, phthalic acid, and dicarboxylic acids adipic acid, sebacic acid, and dodecanedioic acid are comparable with that of anthropogenic emissions of volatile organic compounds (VOCs). The small organic acids, oxalic acid and formic acid, are both highly correlated with calcium, suggesting their records to be affected by changing mineral dust transport to the drilling site.
Abstract:
The influence of an organically modified clay on the curing behavior of three epoxy systems, widely used in the aerospace industry and of different structures and functionalities, was studied. Diglycidyl ether of bisphenol A (DGEBA), triglycidyl p-amino phenol (TGAP) and tetraglycidyl diamino diphenylmethane (TGDDM) were mixed with an octadecyl ammonium ion-modified organoclay and cured with diethyltoluene diamine (DETDA). The techniques of dynamic mechanical thermal analysis (DMTA), chemorheology and differential scanning calorimetry (DSC) were applied to investigate gelation and vitrification behavior, as well as catalytic effects of the clay on resin cure. While the formation of layered silicate nanocomposites based on the bifunctional DGEBA resin has been previously investigated to some extent, this paper represents the first detailed study of the cure behavior of different high-performance epoxy nanocomposite systems.
Abstract:
The precise evaluation of electromagnetic field (EMF) distributions inside biological samples is becoming an increasingly important design requirement for high field MRI systems. In evaluating the induced fields caused by magnetic field gradients and RF transmitter coils, a multilayered dielectric spherical head model is proposed to provide a better understanding of electromagnetic interactions when compared to a traditional homogeneous head phantom. This paper presents Debye potential (DP) and Dyadic Green's function (DGF)-based solutions of the EMFs inside a head-sized, stratified sphere with similar radial conductivity and permittivity profiles as a human head. The DP approach is formulated for the symmetric case in which the source is a circular loop carrying a harmonic-formed current over a wide frequency range. The DGF method is developed for generic cases in which the source may be any kind of RF coil whose current distribution can be evaluated using the method of moments. The calculated EMFs can then be used to deduce MRI imaging parameters. The proposed methods, while not representing the full complexity of a head model, offer advantages in rapid prototyping as the computation times are much lower than a full finite difference time domain calculation using a complex head model. Test examples demonstrate the capability of the proposed models/methods. It is anticipated that this model will be of particular value for high field MRI applications, especially the rapid evaluation of RF resonator (surface and volume coils) and high performance gradient set designs.
Abstract:
In recent years, many real-time applications need to handle data streams. We consider distributed environments in which remote data sources keep collecting data from the real world or from other data sources, and continuously push the data to a central stream processor. In these kinds of environments, significant communication is induced by the transmission of rapid, high-volume and time-varying data streams. At the same time, computing overhead is also incurred at the central processor. In this paper, we develop a novel filter approach, called the DTFilter approach, for evaluating windowed distinct queries in such a distributed system. The DTFilter approach is based on a searching algorithm using a data structure of two height-balanced trees, and it avoids transmitting duplicate items in data streams, thus saving considerable network resources. In addition, theoretical analysis of the time spent performing the search, and of the amount of memory needed, is provided. Extensive experiments also show that the DTFilter approach achieves high performance.
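The duplicate-suppression idea can be sketched concisely: a source transmits an item only if it is not already present in the current sliding window. The paper's structure is two height-balanced trees; in this illustrative stand-in a hash map of in-window counts plus a deque of arrival order plays that role.

```python
from collections import deque

class DTFilterSketch:
    """Sketch of DTFilter's idea: suppress in-window duplicates at the source."""

    def __init__(self, window):
        self.window = window
        self.counts = {}        # item -> occurrences inside the window
        self.order = deque()    # items in arrival order

    def push(self, item):
        """Return True iff the item must be transmitted (new in the window)."""
        if len(self.order) == self.window:      # expire the oldest item
            old = self.order.popleft()
            self.counts[old] -= 1
            if self.counts[old] == 0:
                del self.counts[old]
        self.order.append(item)
        fresh = item not in self.counts
        self.counts[item] = self.counts.get(item, 0) + 1
        return fresh

f = DTFilterSketch(window=3)
sent = [f.push(x) for x in [1, 1, 2, 1, 3]]
# duplicates inside the window are filtered: [True, False, True, False, True]
```

With balanced trees instead of a hash map, the same per-item update runs in O(log w) for window size w while also supporting the ordered searches the windowed distinct query needs.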
Abstract:
This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research activities in robotics have advanced the state of the art regarding the intervention capabilities of autonomous systems. The state of the art in fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation capabilities, applied to ground environments (both indoor and outdoor), has now reached such a readiness level that it allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption in order to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field, within a hostile environment, for the duration of the mission. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need support from an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and exploited as observers of underwater fauna, seabeds, shipwrecks, and so on.
On the other hand, underwater operations like object recovery and equipment maintenance are still challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose and configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network bus. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be as limited as in the case of special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time. The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues like energy consumption, heat dissipation and network capabilities have been evaluated in different scenarios.
Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
Abstract:
The physical implementation of quantum information processing is one of the major challenges of current research. In the last few years, several theoretical proposals and experimental demonstrations on a small number of qubits have been carried out, but a quantum computing architecture that is straightforwardly scalable, universal, and realizable with state-of-the-art technology is still lacking. In particular, a major ultimate objective is the construction of quantum simulators, yielding massively increased computational power in simulating quantum systems. Here we investigate promising routes towards the actual realization of a quantum computer, based on spin systems. The first one employs molecular nanomagnets with a doublet ground state to encode each qubit and exploits the wide chemical tunability of these systems to obtain the proper topology of inter-qubit interactions. Indeed, recent advances in coordination chemistry allow us to arrange these qubits in chains, with tailored interactions mediated by magnetic linkers. These act as switches of the effective qubit-qubit coupling, thus enabling the implementation of one- and two-qubit gates. Molecular qubits can be controlled either by uniform magnetic pulses or by local electric fields. We introduce here two different schemes for quantum information processing, with either global or local control of the inter-qubit interaction, and demonstrate the high performance of these platforms by simulating the system's time evolution with state-of-the-art parameters. The second architecture we propose is based on a hybrid spin-photon qubit encoding, which exploits the best characteristics of photons, whose mobility is used to efficiently establish long-range entanglement, and of spin systems, which ensure long coherence times. The setup consists of spin ensembles coherently coupled to single photons within superconducting coplanar waveguide resonators.
The tunability of the resonators' frequency is exploited as the only manipulation tool to implement a universal set of quantum gates, by bringing the photons into and out of resonance with the spin transition. The time evolution of the system subject to the pulse sequences used to implement complex quantum algorithms has been simulated by numerically integrating the master equation for the system density matrix, thus including the harmful effects of decoherence. Finally, a scheme to overcome the leakage of information due to inhomogeneous broadening of the spin ensemble is pointed out. Both of the proposed setups are based on state-of-the-art technological achievements. Through extensive numerical experiments we show that their performance is remarkably good, even for the implementation of the long sequences of gates used to simulate interesting physical models. Therefore, the systems examined here are truly promising building blocks of future scalable architectures and can be used for proof-of-principle experiments in quantum information processing and quantum simulation.
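Numerically integrating a master equation for a density matrix, the simulation technique named above, can be illustrated on a toy system: one driven qubit with pure dephasing, evolved under the Lindblad equation with fixed-step RK4. The Hamiltonian, rates and pulse below are invented for the example and are not the authors' spin-chain or spin-photon models.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def lindblad_rhs(rho, H, L, gamma):
    # drho/dt = -i[H, rho] + gamma (L rho L† - {L†L, rho}/2)
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss

def evolve(rho, H, L, gamma, t, steps=2000):
    dt = t / steps
    for _ in range(steps):                      # classic fixed-step RK4
        k1 = lindblad_rhs(rho, H, L, gamma)
        k2 = lindblad_rhs(rho + 0.5 * dt * k1, H, L, gamma)
        k3 = lindblad_rhs(rho + 0.5 * dt * k2, H, L, gamma)
        k4 = lindblad_rhs(rho + dt * k3, H, L, gamma)
        rho = rho + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return rho

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # qubit in |0>
H = 0.5 * np.pi * sx                               # resonant pi pulse over t = 1
rho1 = evolve(rho0, H, sz, gamma=0.0, t=1.0)       # rho1[1,1] ≈ 1: flipped to |1>
```

Pulse sequences are simulated by chaining `evolve` calls with different H segments; setting gamma > 0 adds the decoherence whose effect on gate fidelity the abstract refers to.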
Abstract:
A low-cost interrogation scheme is demonstrated for a refractometer based on an in-line fiber long period grating (LPG) Mach–Zehnder interferometer. Using this interrogation scheme, a minimum detectable change in refractive index of Δn ~ 1.8×10⁻⁶ is obtained, which is the highest resolution achieved using a fiber LPG device and is comparable to precision techniques used in industry, including high-performance liquid chromatography and ultraviolet spectroscopy.
Abstract:
The thesis describes an investigation into methods for the design of flexible high-speed product processing machinery, consisting of independent electromechanically actuated machine functions which operate under software coordination and control. An analysis is made of the elements of traditionally designed cam-actuated, mechanically coupled machinery, so that the operational functions and principal performance limitations of the separate machine elements may be identified. These are then used to define the requirements for independent actuators machinery, with a discussion of how this type of design approach is more suited to modern manufacturing trends. A distributed machine controller topology is developed which is a hybrid of hierarchical and pipeline control. An analysis is made, with the aid of dynamic simulation modelling, which confirms the suitability of the controller for flexible machinery control. The simulations include complex models of multiple independent actuators systems, which enable product flow and failure analyses to be performed. An analysis is made of high performance brushless d.c. servomotors and their suitability for actuating machine motions is assessed. Procedures are developed for the selection of brushless servomotors for intermittent machine motions. An experimental rig is described which has enabled the actuation and control methods developed to be implemented. With reference to this, an evaluation is made of the suitability of the machine design method and a discussion is given of the developments which are necessary for operational independent actuators machinery to be attained.
Abstract:
The effect of organically modified clay on the morphology, rheology and mechanical properties of high-density polyethylene (HDPE) and polyamide 6 (PA6) blends (HDPE/PA6 = 75/25 parts) is studied. Virgin and filled blends were prepared by melt compounding the constituents using a twin-screw extruder. The influence of the organoclay on the morphology of the hybrid was investigated in depth by means of wide-angle X-ray diffractometry, transmission and scanning electron microscopies and quantitative extraction experiments. It has been found that the organoclay locates exclusively inside the more hydrophilic polyamide phase during melt compounding. The extrusion process promotes the formation of highly elongated and separated organoclay-rich PA6 domains. Despite its low volume fraction, the filled minor phase eventually merges once the extruded pellets are melted again, giving rise to a co-continuous microstructure. Remarkably, such a morphology persists for a long time in the melt state. A possible compatibilizing action of the organoclay has been investigated by comparing the morphology of the hybrid blend with that of a blend compatibilized using an ethylene–acrylic acid (EAA) copolymer as a compatibilizer precursor. The former remains phase separated, indicating that the filler does not promote an enhancement of the interfacial adhesion. The macroscopic properties of the hybrid blend were interpreted in the light of its morphology. The melt-state dynamics of the materials were probed by means of linear viscoelastic measurements. Many of the peculiar rheological features of polymer-layered silicate nanocomposites based on a single polymer matrix were detected for the hybrid blend. The results have been interpreted by proposing the existence of two distinct populations of dynamical species: HDPE not interacting with the filler, and a slower species, constituted by the organoclay-rich polyamide phase, whose slackened dynamics stabilize the morphology in the melt state.
In the solid state, both the reinforcement effect of the filler and the co-continuous microstructure promote the enhancement of the tensile modulus. Our results demonstrate that adding nanoparticles to polymer blends allows tailoring the final properties of the hybrid, potentially leading to high-performance materials which combine the advantages of polymer blends and the merits of polymer nanocomposites.