948 results for PROCESSING TECHNIQUE
Abstract:
The use of image processing techniques to assess the performance of airport landing lighting, using images collected from an aircraft-mounted camera, is documented. To assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image and then track the luminaires through the entire sequence, storing the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of those pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) feature-matching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to an accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match is found using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, namely the Kanade–Lucas–Tomasi (KLT) and scale-invariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation for the central aim of this research, which is to automatically assess the performance of airport lighting.
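To make the matching step concrete, here is a minimal sketch of the core idea: project a 3D template of luminaire positions into the image through a pinhole camera and refine the camera pose by nonlinear least squares on the reprojection error. The template coordinates, intrinsics and detected centroids are illustrative placeholders, not the authors' model or data.

```python
# Sketch of model-based matching: project a 3D lighting-pattern template
# into the image and refine the camera pose by nonlinear least squares.
# All names and values are illustrative, not the authors' data.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical 3D template of luminaire positions (metres, world frame).
template_3d = np.array([[x, 0.0, z] for x in (-5.0, 0.0, 5.0)
                        for z in (100.0, 150.0, 200.0)])
fx = fy = 1000.0            # assumed focal lengths (pixels)
cx, cy = 320.0, 240.0       # assumed principal point

def project(pose, pts3d):
    """Pinhole projection of 3D points under pose = (rotvec, translation)."""
    rot, t = pose[:3], pose[3:]
    cam = Rotation.from_rotvec(rot).apply(pts3d) + t
    return np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))

def residuals(pose, pts3d, detected_2d):
    """Reprojection error between projected template and detected luminaires."""
    return (project(pose, pts3d) - detected_2d).ravel()

# detected_2d would come from segmenting luminaires in the image; here it
# is fabricated from a 'true' pose plus noise purely for demonstration.
true_pose = np.array([0.02, -0.01, 0.0, 0.0, 5.0, -80.0])
detected_2d = project(true_pose, template_3d) + np.random.normal(0, 0.5, (9, 2))

fit = least_squares(residuals,
                    x0=np.array([0.0, 0.0, 0.0, 0.0, 0.0, -70.0]),
                    args=(template_3d, detected_2d))
print("estimated pose:", fit.x)
```

Once the optimum match is found, each projected template point identifies one luminaire, so per-luminaire pixel counts and grey levels can be accumulated across the sequence.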
Abstract:
BACKGROUND: The functional connectivity magnetic resonance imaging technique has revealed the importance of distributed network structures in higher cognitive processes in the human brain. The hippocampus plays a key role in a distributed network supporting memory encoding and retrieval. Hippocampal dysfunction is a recurrent finding in memory disorders of aging such as amnestic mild cognitive impairment (aMCI), in which learning- and memory-related cognitive abilities are the predominant impairment. The functional connectivity method provides a novel approach in our attempts to better understand the changes occurring in this structure in aMCI patients. METHODS: Functional connectivity analysis was used to examine episodic memory retrieval networks in vivo in 28 aMCI patients and 23 well-matched control subjects, specifically between the hippocampal structures and other brain regions. RESULTS: Compared with control subjects, aMCI patients showed significantly lower hippocampal functional connectivity in a network involving the prefrontal lobe, temporal lobe, parietal lobe, and cerebellum, and higher functional connectivity to more diffuse areas of the brain than normal aging control subjects. In addition, the regions showing increased functional connectivity with the hippocampus demonstrated a significantly negative correlation with episodic memory performance. CONCLUSIONS: aMCI patients displayed altered patterns of functional connectivity during memory retrieval. The degree of this disturbance appears to be related to the level of impairment of processes involved in memory function. Because aMCI is a putative prodromal syndrome of Alzheimer's disease (AD), these early changes in functional connectivity involving the hippocampus may yield important new data to predict whether a patient will eventually develop AD.
Abstract:
This study investigates a model system for potential pharmaceutical materials in fluidised bed processes. In particular, it proposes a novel use of Raman spectroscopy that allows in situ measurement of the composition of the material within the fluidised bed in three spatial dimensions and as a function of time. This is achieved by recording Raman spectra from specific volumes of space. The work shows that Raman spectroscopy can be used to provide 3D maps of the concentration and chemical structure of the particles in a fluidised bed within a relatively short (120 s) time window. At the most basic level, the technique measures particle density via the intensity of the Raman spectra. More importantly, the data are also rich in spectroscopic information on the chemical structure of the fluidised particles, which is useful either for monitoring a given granulation process or, more generally, for analysing the dynamics of the airflow if the data were incorporated into an appropriate model. The technique has the potential to give detailed in situ information on how the structure and composition of the granules/powders within the fluidised bed (dryer or granulator) vary with position and evolve with time.
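As a toy illustration of the density-mapping idea, band-integrated Raman intensity at each probe position serves as a proxy for local particle concentration. The grid, band limits and spectra below are invented placeholders, not the study's measurements.

```python
# Toy sketch of 3D Raman mapping: at each probe position, integrate the
# intensity over a characteristic Raman band as a proxy for local particle
# density. Grid, band limits, and spectra are placeholders.
import numpy as np
from scipy.integrate import trapezoid

wavenumber = np.linspace(200, 1800, 800)          # cm^-1 axis (assumed)
band = (wavenumber > 990) & (wavenumber < 1020)   # hypothetical analyte band

density_map = np.zeros((5, 5, 10))                # (x, y, z) probe grid
for x in range(5):
    for y in range(5):
        for z in range(10):
            spectrum = np.abs(np.random.randn(800)) + 0.1 * z  # placeholder
            density_map[x, y, z] = trapezoid(spectrum[band], wavenumber[band])

# e.g. locate the densest layer along the bed height
print("densest layer index:", density_map.sum(axis=(0, 1)).argmax())
```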
Abstract:
The highly structured nature of many digital signal processing operations allows them to be directly implemented as regular VLSI circuits. This feature has been successfully exploited in the design of a number of commercial chips, some examples of which are described. While many of the architectures on which such chips are based were originally derived on a heuristic basis, there is increasing interest in the development of systematic design techniques for the direct mapping of computations onto regular VLSI arrays. The purpose of this paper is to show how the technique proposed by Kung can be readily extended to the design of VLSI signal processing chips where the organisation of computations at the level of individual data bits is of paramount importance. The technique in question allows architectures to be derived using the projection and retiming of data dependence graphs.
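To illustrate what projecting a dependence graph means in practice, the sketch below maps the index space of a 1D convolution onto a linear processor array using a linear schedule and a projection direction. The schedule and allocation vectors are standard textbook choices assumed for illustration, not those of the paper.

```python
# Sketch of mapping a regular computation onto a linear array by
# projecting its dependence graph: each index point (i, j) of a 1-D
# convolution y[i] = sum_j w[j] * x[i - j] receives a time step from a
# linear schedule and a processor from a projection along i.
N_TAPS, N_OUT = 4, 8
schedule = lambda i, j: i + j   # time(i, j) = [1 1] . (i, j)
allocate = lambda i, j: j       # processor(i, j): project along i

mapping = {}
for i in range(N_OUT):
    for j in range(N_TAPS):
        mapping.setdefault((schedule(i, j), allocate(i, j)), []).append((i, j))

# A valid space-time mapping assigns at most one computation per
# (time, processor) slot, and every dependence must advance in time.
assert all(len(v) == 1 for v in mapping.values())
assert all(schedule(i, j + 1) > schedule(i, j)
           for i in range(N_OUT) for j in range(N_TAPS - 1))
print("processors used:", len({p for _, p in mapping}))   # -> 4
```

Retiming then inserts registers along the array's edges so that each dependence crosses at least one clock boundary; the bit-level extension discussed in the paper applies the same machinery with index points refined down to individual data bits.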
Abstract:
Biosignal measurement and processing is increasingly being deployed in ambulatory situations, particularly in connected health applications. Such an environment dramatically increases the likelihood of artifacts, which can occlude features of interest and reduce the quality of information available in the signal. If multichannel recordings are available for a given signal source, there is currently a considerable range of methods that can suppress or, in some cases, remove the distorting effect of such artifacts. There are, however, considerably fewer techniques available if only a single-channel measurement is available, yet single-channel measurements are important where minimal instrumentation complexity is required. This paper describes a novel artifact removal technique for use in such a context. The technique, known as ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA), is capable of operating on single-channel measurements. The EEMD technique is first used to decompose the single-channel signal into a multidimensional signal. The CCA technique is then employed to isolate the artifact components from the underlying signal using second-order statistics. The new technique is tested against the currently available wavelet-denoising and EEMD-ICA techniques using both electroencephalography and functional near-infrared spectroscopy data and is shown to produce significantly improved results.
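A hedged sketch of that pipeline follows, assuming the PyEMD package for EEMD and scikit-learn's CCA. The second-order step is implemented here as CCA between the IMF matrix and a one-sample-delayed copy, with artifact components identified by low autocorrelation; the threshold and toy signal are placeholders, not the paper's values.

```python
# Sketch of single-channel artifact removal with EEMD-CCA: decompose the
# channel into IMFs with EEMD, run CCA between the IMF matrix and a
# one-sample-delayed copy (second-order statistics), zero components with
# low autocorrelation (treated as artifact), then reconstruct.
import numpy as np
from PyEMD import EEMD                      # assumed: PyEMD package
from sklearn.cross_decomposition import CCA

fs = 250
t = np.arange(0, 4, 1 / fs)
signal = (np.sin(2 * np.pi * 10 * t)                       # toy biosignal
          + 2.0 * (np.random.rand(t.size) > 0.99))         # spike artifacts

imfs = EEMD().eemd(signal)                  # shape: (n_imfs, n_samples)
X, Y = imfs[:, 1:].T, imfs[:, :-1].T        # IMFs vs. delayed copy
cca = CCA(n_components=imfs.shape[0]).fit(X, Y)
sources = cca.transform(X)                  # canonical components

# Components weakly correlated with their own delayed version carry
# little temporal structure and are zeroed as artifact (placeholder cut).
autocorr = np.array([np.corrcoef(s[1:], s[:-1])[0, 1] for s in sources.T])
sources[:, autocorr < 0.95] = 0.0

imfs_clean = cca.inverse_transform(sources) # back to IMF space
cleaned = imfs_clean.sum(axis=1)            # recombine IMFs -> cleaned signal
print("cleaned signal length:", cleaned.size)
```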
Abstract:
In the current investigation, rubber/clay nanocomposites were prepared by two different methods using hydrogenated nitrile butadiene rubber (HNBR) and the organoclay Cloisite 15A (C15A). A novel approach involving swelling of C15A by ultrasonication in HNBR solution was carried out to improve the exfoliation and compatibilisation of the organoclay with the HNBR matrix. With the addition of 5 phr of clay, the elongation at break and tear strength improved by 16% and 24%, respectively. The effects of two coupling agents, an amino-functional silane and a tetrasulfido silane, on the nanocomposites were also investigated: with the use of the silanes, the elongation at break and tear strength improved by 46% and 77%, respectively. The improvement in the mechanical properties is attributed to improved interaction between the organoclay and the HNBR matrix, which was studied by X-ray diffraction and transmission electron microscopy. The pre-dispersion technique clearly yields a very good improvement in dispersion and properties owing to better filler-rubber compatibility.
Abstract:
We address the propagation of a single-photon pulse with two polarization components, i.e., a polarization qubit, in an inhomogeneously broadened "phaseonium" Λ-type three-level medium. We combine some of the non-trivial propagation effects characteristic of this kind of coherently prepared system with the controlled reversible inhomogeneous broadening technique to propose several quantum information processing applications, such as a protocol for polarization-qubit filtering and sieving as well as a tunable polarization beam splitter. Moreover, we show that, by imposing a spatial variation of the atomic coherence phase, an efficient quantum memory for the incident polarization qubit can also be implemented in Λ-type three-level systems.
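For reference, a Λ-type three-level medium couples two ground states |g⟩ and |s⟩ to a common excited state |e⟩ via probe and control fields; "phaseonium" refers to preparing the |g⟩-|s⟩ coherence with a controlled phase. In the rotating-wave approximation, the standard interaction Hamiltonian (a textbook form, not reproduced from the paper) is

H = -\hbar\,\Delta\,|e\rangle\langle e| - \hbar\,\delta\,|s\rangle\langle s| - \frac{\hbar}{2}\left(\Omega_p\,|e\rangle\langle g| + \Omega_c\,|e\rangle\langle s| + \text{h.c.}\right),

where Ω_p and Ω_c are the probe and control Rabi frequencies and Δ and δ are the one- and two-photon detunings.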
Abstract:
In security and surveillance there is an increasing need to process image data efficiently and effectively, either at source or in large data networks. Whilst Field Programmable Gate Arrays (FPGAs) have been seen as a key technology for enabling this, they are typically programmed using high-level and/or hardware description language (HDL) synthesis approaches; this is a major disadvantage in terms of the time needed to design or program them and to verify correct operation, and it considerably reduces the programmability of any technique based on this technology. The work here proposes a different approach: optimised soft-core processors that can be programmed in software. In particular, the paper proposes a design tool chain for programming such processors that uses the CAL Actor Language as a starting point for describing an image processing algorithm and targets its implementation to these custom-designed, soft-core processors on FPGA. The main purpose is to exploit task and data parallelism in order to achieve the same parallelism as a previous HDL implementation while avoiding the design-time, verification and debugging steps associated with such approaches.
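The CAL source itself is not shown in the abstract; as a language-neutral illustration of the dataflow-actor model such a tool chain starts from, here is a minimal Python sketch of actors connected by FIFO queues. The actor names and pixel operations are purely illustrative, not the paper's design.

```python
# Minimal illustration of the dataflow-actor model underlying CAL: actors
# fire when input tokens are available and emit results onto FIFO edges.
# This is a language-neutral sketch, not CAL or the paper's tool chain.
from collections import deque

class Actor:
    """Fires once per input token, emitting one output token."""
    def __init__(self, fire):
        self.inbox = deque()
        self.fire = fire

def run_pipeline(actors, tokens):
    # Feed tokens to the first actor; each actor's outputs become the
    # next actor's inputs (the FIFO edges of the dataflow graph).
    actors[0].inbox.extend(tokens)
    for current, downstream in zip(actors, actors[1:]):
        while current.inbox:
            downstream.inbox.append(current.fire(current.inbox.popleft()))
    last = actors[-1]
    return [last.fire(last.inbox.popleft()) for _ in range(len(last.inbox))]

threshold = Actor(lambda px: 255 if px > 128 else 0)   # task parallelism:
invert = Actor(lambda px: 255 - px)                    # independent stages
print(run_pipeline([threshold, invert], [10, 200, 128, 255]))  # [255, 0, 255, 0]
```

Because the actors communicate only through tokens, independent stages can run concurrently (task parallelism) and token streams can be split across actor instances (data parallelism), which is what the tool chain maps onto the soft-core processors.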
Abstract:
The formulation of BCS Class II drugs as amorphous solid dispersions (ASDs) has been shown to improve the aqueous solubility of these compounds. While hot melt extrusion (HME) and spray drying (SD) are among the most common methods for producing ASDs, the high temperatures often required for HME can restrict the processing of thermally labile drugs, while the use of toxic organic solvents during SD can impact on end-product toxicity. In this study, we investigated the potential of supercritical fluid impregnation (SFI) using carbon dioxide as an alternative process for ASD production of a model poorly water-soluble drug, indomethacin (INM). In doing so, we produced ASDs without the use of organic solvents and at temperatures considerably lower than those required for HME. Previous studies have concentrated on the characterization of ASDs produced using HME or SFI but have not considered both processes together. Dispersions were manufactured with two different polymers, Soluplus (SOL) and polyvinylpyrrolidone K15 (PVP), using both SFI and HME, and characterized for drug morphology, homogeneity, presence of drug-polymer interactions, glass transition temperature, amorphous stability of the drug within the formulation, and non-sink drug release, the latter to measure the ability of each formulation to create a supersaturated drug solution. Fully amorphous dispersions were successfully produced at 50% w/w drug loading using HME and 30% w/w drug loading using SFI. For both polymers, formulations containing 50% w/w INM manufactured via SFI contained the drug in the γ-crystalline form; interestingly, there were lower levels of crystallinity in PVP dispersions than in SOL dispersions. FTIR was used to probe for the presence of drug-polymer interactions within both polymer systems: for PVP systems, the nature of these interactions depended upon the processing method, whereas for Soluplus formulations it did not. The area under the dissolution curve (AUC) was used as a measure of the time during which a supersaturated concentration could be maintained, and for all systems the SFI formulations performed better than the corresponding HME formulations.
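The AUC comparison reduces to numerically integrating concentration-time dissolution profiles. A minimal sketch with invented data points (not the study's measurements) follows:

```python
# Sketch of the dissolution AUC metric: numerically integrate
# concentration-time profiles and compare formulations. The profiles
# below are invented placeholders, not the study's data.
import numpy as np
from scipy.integrate import trapezoid

time_min = np.array([0, 15, 30, 60, 120, 240])      # sampling times (min)
conc_sfi = np.array([0, 80, 140, 150, 145, 140])    # µg/mL, hypothetical SFI ASD
conc_hme = np.array([0, 60, 100, 120, 110, 100])    # µg/mL, hypothetical HME ASD

auc_sfi = trapezoid(conc_sfi, time_min)
auc_hme = trapezoid(conc_hme, time_min)
print(f"AUC (SFI) = {auc_sfi:.0f}, AUC (HME) = {auc_hme:.0f} µg·min/mL")
```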
Abstract:
Given the growing interest in thermal processing methods, this study describes the use of an advanced rheological technique, capillary rheometry, to accurately determine the thermorheological properties of two pharmaceutical polymers, Eudragit E100 (E100) and hydroxypropylcellulose JF (HPC), and their blends, both in the presence and absence of a model therapeutic agent (quinine, as the base and the hydrochloride salt). Furthermore, the glass transition temperatures (Tg) of the cooled extrudates produced by capillary rheometry were characterised using Dynamic Mechanical Thermal Analysis (DMTA), enabling correlations to be drawn between the information derived from capillary rheometry and the glass transition properties of the extrudates. The shear viscosities of E100 and HPC (and their blends) decreased with increasing temperature and shear rate, with the shear viscosity of E100 being significantly greater than that of HPC at all temperatures and shear rates. All platforms were readily processed at shear rates relevant to extrusion (approximately 200–300 s⁻¹) and injection moulding (approximately 900 s⁻¹). Quinine base lowered the shear viscosities of E100 and E100/HPC blends during processing and the Tg of the extrudates, indicative of plasticisation at processing temperatures and when cooled (i.e. in the solid state). Quinine hydrochloride (20% w/w) increased the shear viscosities of E100, HPC and their blends during processing and did not affect the Tg of the parent polymer; however, the shear viscosities of these systems were not prohibitive to processing at shear rates relevant to extrusion and injection moulding. As the ratio of E100:HPC within the polymer blends increased, the lowering effect of quinine base on both the shear viscosity and the Tg of the blends increased, reflecting the greater solubility of quinine within E100. In conclusion, this study has highlighted the value of capillary rheometry in identifying processing conditions, polymer miscibility and plasticisation phenomena.
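Shear-thinning of this kind is commonly summarised with the power-law (Ostwald-de Waele) model, η(γ̇) = K·γ̇^(n−1). A hedged sketch fitting that model to placeholder capillary-rheometry data (not the study's measurements):

```python
# Sketch: fit the power-law model eta = K * rate**(n - 1) to shear
# viscosity data of the kind capillary rheometry produces. The data
# points are placeholders, not the study's measurements.
import numpy as np
from scipy.optimize import curve_fit

shear_rate = np.array([50.0, 100.0, 200.0, 300.0, 900.0])   # s^-1
viscosity = np.array([950.0, 640.0, 430.0, 340.0, 155.0])   # Pa.s, invented

def power_law(rate, K, n):
    return K * rate ** (n - 1.0)

(K, n), _ = curve_fit(power_law, shear_rate, viscosity, p0=(5000.0, 0.5))
print(f"consistency K = {K:.0f} Pa.s^n, flow index n = {n:.2f} (n < 1: shear-thinning)")
```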
Abstract:
Flow processing is a fundamental element of stateful traffic classification and has been recognized as an essential factor in delivering today's application-aware network operations and security services. The basic function within a flow processing engine is to search and maintain a flow table: create new flow entries when no existing entry matches, and associate each entry with flow states and actions for future queries. Network state information on a per-flow basis must be managed efficiently to enable Ethernet frame transmission at 40 Gbit/s (Gbps) and, in the near future, 100 Gbps. This paper presents a hardware solution for flow state management that implements large-scale flow tables on popular computer memories using DDR3 SDRAMs. Working with a dedicated flow lookup table at over 90 million lookups per second, the proposed system is able to manage 512-bit state information at run time.
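In software terms, the basic flow-table function described above looks like the sketch below: look up a packet's 5-tuple, create an entry on a miss, and update per-flow state on every hit. In the paper this runs in hardware against DDR3 SDRAM; the Python stand-in and its field names are illustrative only.

```python
# Software stand-in for the flow-table function: look up a packet's
# 5-tuple, create a new entry when no entry matches, and update per-flow
# state for future queries. Field names are illustrative placeholders.
from dataclasses import dataclass

FlowKey = tuple  # (src_ip, dst_ip, src_port, dst_port, protocol)

@dataclass
class FlowState:
    packets: int = 0
    bytes: int = 0
    action: str = "forward"       # placeholder per-flow action

flow_table: dict[FlowKey, FlowState] = {}

def process_packet(key: FlowKey, length: int) -> FlowState:
    state = flow_table.get(key)   # flow lookup
    if state is None:             # no entry matches: create one
        state = flow_table[key] = FlowState()
    state.packets += 1            # update per-flow state
    state.bytes += length
    return state

process_packet(("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"), 1500)
s = process_packet(("10.0.0.1", "10.0.0.2", 1234, 80, "tcp"), 400)
print(s)  # FlowState(packets=2, bytes=1900, action='forward')
```

The hardware problem the paper addresses is doing exactly this at tens of millions of lookups per second, with the hash table and 512-bit state records laid out in DDR3 SDRAM rather than in a software dictionary.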