Abstract:
A co-precipitation process is used to manufacture Y2Cu2O5 precursor powders. Upon calcination at high temperatures, such as 800 °C, the co-precipitated powder transforms to Y2Cu2O5. By selective variation of the calcination parameters, grain growth can be controlled to yield Y2Cu2O5 powders of different sizes, including sub-micron averages. ICP analysis, X-ray diffraction, electron microscopy, a.c. magnetic susceptibility and FT-Raman spectroscopy are used to characterize the phase development, morphology and purity of the powders.
Abstract:
A co-precipitation process for the large-scale manufacture of bismuth-based HTSC powders has been demonstrated. Powders manufactured by this process have high phase purity and precisely reproducible stoichiometry. Controlled time and temperature variations are used to convert precursors to HTSC compounds and to obtain specific particle-size distributions. The process has been demonstrated for a variety of compositions in the BSCCO system. Electron microscopy, X-ray diffraction, inductively coupled plasma spectroscopy and magnetic-susceptibility measurements are used to characterize the powders.
Abstract:
Quantities of Y2BaCuO5 powder greater than 500 g have been manufactured by a co-precipitation process. By suitable heat treatments, the particle size of these powders can be varied from 5 µm to less than 500 nm. Sub-micrometre powders may, under some conditions, have a duller green colour, which is attributed to <2% unreacted material. However, after re-grinding and re-firing this powder, high-purity powders can be achieved without significant grain growth. Inductively coupled plasma (ICP) spectroscopy is used to measure the stoichiometry of the powders and X-ray diffraction is used to determine phase purity. In both cases, the bulk composition is consistent with Y2BaCuO5 and the phase purity is considered better than 95%.
Abstract:
Automatic pain monitoring has the potential to greatly improve patient diagnosis and outcomes by providing a continuous objective measure. One of the most promising approaches is to detect facial expressions automatically. However, current approaches have failed due to their inability to: 1) integrate rigid and non-rigid head motion into a single feature representation, and 2) incorporate salient temporal patterns into the classification stage. In this paper, we tackle the first problem by developing a “histogram of facial action units” representation using Active Appearance Model (AAM) face features, and then utilize a Hidden Conditional Random Field (HCRF) to overcome the second issue. We show that both of these methods improve performance on sequence-level pain detection compared to current state-of-the-art methods on the UNBC-McMaster Shoulder Pain Archive.
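The abstract does not spell out how the sequence-level descriptor is built; as an illustration, a "histogram of facial action units" could be formed by histogramming per-frame AU intensity estimates over the whole sequence. A minimal NumPy sketch, assuming hypothetical AU intensities have already been extracted from the AAM features (the bin count and intensity range are illustrative assumptions):

```python
import numpy as np

def au_histogram(au_intensities, n_bins=8, max_intensity=5.0):
    """Build a sequence-level 'histogram of facial action units' descriptor.

    au_intensities: (n_frames, n_aus) array of per-frame AU intensity
    estimates (e.g. derived from AAM shape/appearance features).
    Returns a flat vector: one n_bins histogram per AU, normalised
    by the sequence length so sequences of different lengths compare.
    """
    n_frames, n_aus = au_intensities.shape
    edges = np.linspace(0.0, max_intensity, n_bins + 1)
    hists = []
    for j in range(n_aus):
        h, _ = np.histogram(au_intensities[:, j], bins=edges)
        hists.append(h / max(n_frames, 1))  # normalise by frame count
    return np.concatenate(hists)

# Toy sequence: 100 frames, 3 AUs, random intensities in [0, 5)
rng = np.random.default_rng(0)
seq = rng.uniform(0, 5, size=(100, 3))
feat = au_histogram(seq)
print(feat.shape)  # (24,) -> 3 AUs x 8 bins
```

A fixed-length vector like this can then be fed to a sequence classifier; the HCRF stage itself is not sketched here.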
Abstract:
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the absence of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
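Appearance-based matching on low-resolution imagery can be illustrated with a simple sum-of-absolute-differences template comparison, similar in spirit to (but much simpler than) the local-view matching used by systems such as RatSLAM. The image sizes and shift tolerance below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def scene_difference(img_a, img_b, max_shift=2):
    """Compare two low-resolution grayscale scene templates.

    Returns the minimum mean absolute pixel difference over small
    horizontal shifts, giving some tolerance to camera yaw between
    visits to the same place.
    """
    h, w = img_a.shape
    best = np.inf
    for s in range(-max_shift, max_shift + 1):
        a = img_a[:, max(0, s):w + min(0, s)].astype(float)
        b = img_b[:, max(0, -s):w + min(0, -s)].astype(float)
        best = min(best, np.abs(a - b).mean())
    return best

# A scene matches a 1-pixel-shifted copy of itself better than noise.
rng = np.random.default_rng(1)
scene = rng.integers(0, 256, size=(12, 16))
shifted = np.roll(scene, 1, axis=1)
noise = rng.integers(0, 256, size=(12, 16))
print(scene_difference(scene, shifted) < scene_difference(scene, noise))  # True
```

Thresholding this difference score is one simple way to decide whether the current view matches a previously learned scene.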
Abstract:
Texture enhancement is an important component of image processing, with extensive applications in science and engineering. The quality of medical images, quantified using the texture of the images, plays a significant role in the routine diagnosis performed by medical practitioners. Previously, image texture enhancement was performed using classical integral-order differential mask operators. More recently, first-order fractional differential operators have been used to enhance images. Experiments show that the fractional differential not only maintains the low-frequency contour features in the smooth areas of an image, but also nonlinearly enhances the edges and textures corresponding to high-frequency image components. However, whilst these methods perform well in particular cases, they are not routinely useful across all applications. To this end, we applied the second-order Riesz fractional differential operator to improve upon existing approaches to texture enhancement. Compared with the classical integral-order differential mask operators and other fractional differential operators, our new algorithms provide higher signal-to-noise ratios, which leads to superior image quality.
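The Riesz operator used in the paper is not reproduced here; as a sketch of the general idea behind fractional differential masks, the one-dimensional Grünwald–Letnikov construction generates mask coefficients recursively from the fractional order, and reduces to the classical backward difference when the order is 1. This is a generic illustration under that assumption, not the paper's algorithm:

```python
import numpy as np

def gl_fractional_mask(order, length=5):
    """Grunwald-Letnikov coefficients for a 1-D fractional difference.

    w_0 = 1 and w_k = w_{k-1} * (k - 1 - order) / k; for order = 1
    this reduces to the classical backward difference [1, -1, 0, ...].
    """
    w = np.empty(length)
    w[0] = 1.0
    for k in range(1, length):
        w[k] = w[k - 1] * (k - 1 - order) / k
    return w

def enhance_rows(image, order=0.5, length=5):
    """Apply the fractional-difference mask along image rows
    (borders handled by replicating the first column)."""
    mask = gl_fractional_mask(order, length)
    img = image.astype(float)
    out = np.zeros_like(img)
    for k, wk in enumerate(mask):
        shifted = np.roll(img, k, axis=1)
        if k:
            shifted[:, :k] = img[:, :1]  # replicate the left border
        out += wk * shifted
    return out

print(gl_fractional_mask(0.5, 4))
ramp = np.tile(np.arange(8.0), (4, 1))
print(enhance_rows(ramp, order=0.5).shape)  # (4, 8)
```

A 2-D enhancement would apply such masks along several directions and combine the responses; the Riesz form differs in detail but shares the fractional-order weighting idea.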
Abstract:
Organizations increasingly make use of social media in order to compete for customer awareness and to improve the quality of their goods and services. Multiple techniques of social media analysis are already in use. Nevertheless, theoretical underpinnings and a sound research agenda are still unavailable in this field. In order to contribute to setting up such an agenda, we introduce digital social signal processing (DSSP) as a new research stream in IS that requires multi-faceted investigation. Our DSSP concept is founded upon a set of four sequential activities: sensing digital social signals that are emitted by individuals on social media; decoding online social media data in order to reconstruct digital social signals; matching the signals with consumers’ life events; and configuring individualized goods and service offerings tailored to the individual needs of customers. We further contribute to tying together loose ends of different research areas in order to frame DSSP as a field for further investigation. We conclude by developing a research agenda.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications in domains such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used to localise objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen such that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
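The ground-plane calibration step described above, mapping image-plane robot detections to broadcast global positions, can be sketched as a standard Direct Linear Transform (DLT) homography fit. This is a generic illustration with synthetic data, not the dissertation's exact implementation:

```python
import numpy as np

def estimate_homography(img_pts, world_pts):
    """Direct Linear Transform: fit H so that world ~ H @ [u, v, 1].

    img_pts, world_pts: (n, 2) arrays of corresponding points, n >= 4
    (e.g. image-plane detections of a robot paired with the global
    ground-plane positions it broadcasts).
    """
    A = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        A.append([-u, -v, -1, 0, 0, 0, x * u, x * v, x])
        A.append([0, 0, 0, -u, -v, -1, y * u, y * v, y])
    # The homography is the null vector of A: last row of V^T.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def to_world(H, u, v):
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Synthetic check: recover a known ground-plane mapping.
H_true = np.array([[2.0, 0.1, 5.0], [0.3, 1.5, -2.0], [0.001, 0.002, 1.0]])
img = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 3]], dtype=float)
world = np.array([to_world(H_true, u, v) for u, v in img])
H = estimate_homography(img, world)
print(np.allclose(to_world(H, 4.0, 7.0), to_world(H_true, 4.0, 7.0)))  # True
```

Once each camera has its own H, any image-plane detection can be projected into the shared global frame for localisation.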
Abstract:
The selection of optimal camera configurations (camera locations, orientations, etc.) for multi-camera networks remains an unsolved problem. Previous approaches largely focus on proposing various objective functions to achieve different tasks. Most of them, however, do not generalize well to large-scale networks. To tackle this, we propose a statistical framework for the problem and a trans-dimensional simulated annealing algorithm to solve it effectively. We compare our approach with a state-of-the-art method based on binary integer programming (BIP) and show that ours offers similar performance on small-scale problems. However, we also demonstrate the capability of our approach in dealing with large-scale problems and show that it produces better results than two alternative heuristics designed to address the scalability issue of BIP. Finally, we show the versatility of our approach in a number of specific scenarios.
Abstract:
Background The size of the carrier influences the aerosolization of drug from a dry powder inhaler (DPI) formulation. Currently, lactose monohydrate particles in a variety of sizes are preferentially used in carrier-based DPI formulations of various drugs; however, contradictory reports exist regarding the effect of carrier size on the dispersion of drug. In this study we examined the influence of the intrinsic particle size of a polymeric carrier on the aerosolization of the model drug salbutamol sulphate (SS). Methods Four different sizes (20–150 µm) of polymer carriers were fabricated using a solvent evaporation technique, and the dispersion of SS particles from these carriers was measured by a Twin Stage Impinger (TSI). The size and morphological properties of the polymer carriers were determined by laser diffraction and SEM, respectively. Results The FPF from these carriers increased from 5.6% to 21.3% with increasing carrier size, and was greatest (21%) for the largest carrier size (150 µm). Conclusions The aerosolization of drug depended on the size of the polymer carriers: smaller carriers resulted in lower FPF, which increased with increasing carrier size. For a fixed mass of drug particles in a formulation, the mass of drug particles per unit area of carrier is higher in formulations containing the larger carriers, which leads to an increase in the dispersion of drug due to the increased mechanical forces occurring between the carriers and the device walls.
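The fine particle fraction (FPF) quoted above is, for a TSI, typically the drug mass recovered in the lower (second) stage as a fraction of the total recovered dose; definitions vary between studies, and the numbers below are illustrative, not taken from this work:

```python
def fine_particle_fraction(drug_stage1_mg, drug_stage2_mg, drug_device_mg=0.0):
    """Fine particle fraction (%) from a Twin Stage Impinger run.

    Drug recovered in the lower (second) stage is the 'fine'
    respirable fraction; here it is expressed as a percentage of the
    total recovered dose (some studies use the emitted dose instead).
    """
    total = drug_stage1_mg + drug_stage2_mg + drug_device_mg
    return 100.0 * drug_stage2_mg / total

# Illustrative masses (mg), not data from the study:
print(round(fine_particle_fraction(7.0, 2.0, 1.0), 1))  # 20.0
```

With this definition, an FPF rising from 5.6% to 21.3% means roughly four times as much of the recovered drug reached the lower stage for the largest carrier.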
Abstract:
Purpose: This study investigated the effect of chemical conjugation of the amino acid L-leucine to the polysaccharide chitosan on the dispersibility and drug release pattern of a polymeric nanoparticle (NP)-based controlled release dry powder inhaler (DPI) formulation. Methods: A chemical conjugate of L-leucine with chitosan was synthesized and characterized by Infrared (IR) Spectroscopy, Nuclear Magnetic Resonance (NMR) Spectroscopy, Elemental Analysis and X-ray Photoelectron Spectroscopy (XPS). Nanoparticles of both chitosan and its conjugate were prepared by a water-in-oil emulsification – glutaraldehyde cross-linking method using the antihypertensive agent diltiazem (Dz) hydrochloride as the model drug. The surface morphology and particle size distribution of the nanoparticles were determined by Scanning Electron Microscopy (SEM) and Dynamic Light Scattering (DLS). The dispersibility of the nanoparticle formulation was analysed by a Twin Stage Impinger (TSI) with a Rotahaler as the DPI device. Deposition of the particles in the different stages was determined by gravimetry and the amount of drug released was analysed by UV spectrophotometry. The release profile of the drug was studied in phosphate buffered saline at 37 °C and analysed by UV spectrophotometry. Results: The TSI study revealed that the fine particle fractions (FPF), as determined gravimetrically, for empty and drug-loaded conjugate nanoparticles were significantly higher than for the corresponding chitosan nanoparticles (24±1.2% and 21±0.7% vs 19±1.2% and 15±1.5% respectively; n=3, p<0.05). The FPF of drug-loaded chitosan and conjugate nanoparticles, in terms of the amount of drug determined spectrophotometrically, had similar values (21±0.7% vs 16±1.6%). After an initial burst, both chitosan and conjugate nanoparticles showed controlled release that lasted about 8 to 10 days, but conjugate nanoparticles showed twice as much total drug release compared to chitosan nanoparticles (~50% vs ~25%).
Conjugate nanoparticles also showed significantly higher drug loading and entrapment efficiency than chitosan nanoparticles (conjugate: 20±1% & 46±1%; chitosan: 16±1% & 38±1%; n=3, p<0.05). Conclusion: Although L-leucine conjugation to chitosan increased the dispersibility of the formulated nanoparticles, the FPF values are still far from optimal. The particles showed a high level of initial burst release (chitosan, 16%; conjugate, 31%), which will also need further optimization.
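The drug loading and entrapment efficiency figures quoted above follow the usual definitions, which can be sketched as follows (the masses are illustrative, not the study's raw data):

```python
def drug_loading(entrapped_drug_mg, nanoparticle_mass_mg):
    """Drug loading: entrapped drug mass as a percentage of the
    total nanoparticle mass."""
    return 100.0 * entrapped_drug_mg / nanoparticle_mass_mg

def entrapment_efficiency(entrapped_drug_mg, initial_drug_mg):
    """Entrapment efficiency: percentage of the drug initially added
    that ends up entrapped in the particles."""
    return 100.0 * entrapped_drug_mg / initial_drug_mg

# Illustrative masses (mg), not data from the study:
print(round(drug_loading(20.0, 100.0)))         # 20
print(round(entrapment_efficiency(9.2, 20.0)))  # 46
```

Both quantities are routinely obtained by dissolving a known particle mass and assaying the drug spectrophotometrically, as described in the Methods.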
Abstract:
The diagnostics of mechanical components operating in transient conditions is still an open issue, in both research and industry. Indeed, the signal processing techniques developed to analyse stationary data are not applicable, or suffer a loss of effectiveness, when applied to signals acquired in transient conditions. In this paper, a suitable and original signal processing tool (named EEMED), which can be used for mechanical component diagnostics in any operating condition and at any noise level, is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition (EMD) and Minimum Entropy Deconvolution (MED), together with the analytical approach of the Hilbert transform. The proposed tool supplies diagnostic information on the basis of experimental vibrations measured in transient conditions. The tool was originally developed to detect localized faults on bearings installed in high-speed train traction equipment, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on spectral kurtosis or envelope analysis, which have until now been the benchmark for bearing diagnostics.
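The EEMED tool itself is not reproduced here, but the envelope-analysis benchmark it is compared against can be sketched: compute the analytic signal with an FFT-based Hilbert transform, take its magnitude (the envelope), and look for a peak in the envelope spectrum at the bearing fault's characteristic frequency. The 57 Hz modulation and 3 kHz resonance in the synthetic example are assumed values for illustration:

```python
import numpy as np

def envelope_spectrum(x, fs):
    """Classical envelope analysis via an FFT-based Hilbert transform.

    Returns (freqs, spec): the spectrum of the mean-removed envelope.
    A localized bearing fault appears as a peak at its characteristic
    fault frequency.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)           # frequency-domain Hilbert weights
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    env = np.abs(analytic)    # amplitude envelope
    env -= env.mean()
    spec = np.abs(np.fft.rfft(env)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spec

# Synthetic fault: a 3 kHz resonance amplitude-modulated at 57 Hz.
fs, T = 20000, 2.0
t = np.arange(int(fs * T)) / fs
x = (1 + 0.8 * np.cos(2 * np.pi * 57 * t)) * np.sin(2 * np.pi * 3000 * t)
freqs, spec = envelope_spectrum(x, fs)
print(round(freqs[np.argmax(spec)]))  # 57
```

Under strong speed variation this stationary-signal analysis degrades, which is the gap the paper's data-adaptive EMD/MED-based tool targets.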
Abstract:
The signal processing techniques developed for the diagnostics of mechanical components operating in stationary conditions are often not applicable, or suffer a loss of effectiveness, when applied to signals measured in transient conditions. In this chapter, an original signal processing tool is developed by exploiting data-adaptive techniques such as Empirical Mode Decomposition and Minimum Entropy Deconvolution, together with the analytical approach of the Hilbert transform. The tool has been developed to detect localized faults on bearings in the traction systems of high-speed trains, and it is more effective at detecting faults in non-stationary conditions than signal processing tools based on envelope analysis or spectral kurtosis, which have until now been the benchmark for bearing diagnostics.