Abstract:
Bovine intestine samples were dried in a heat pump fluidized bed drier at atmospheric pressure, at temperatures both below and above the material's freezing point, equipped with a continuous monitoring system. The drying characteristics were investigated over the temperature range −10 to 25 °C and airflow velocities of 1.5 to 2.5 m/s. Some experiments were conducted as single-temperature drying runs and others as two-stage drying runs employing two temperatures. An Arrhenius-type equation was used to interpret the influence of the drying air parameters on the effective diffusivity, calculated with the method of slopes, in terms of activation energy, and this was found to be sensitive to temperature. The effective diffusion coefficient of moisture transfer was determined by the Fickian method assuming uni-dimensional moisture movement, for both moisture removal by evaporation and by combined sublimation and evaporation. Correlations expressing the effective moisture diffusivity as a function of drying temperature are reported.
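The Arrhenius-type temperature dependence used to interpret the effective diffusivity can be sketched as below; the pre-exponential factor and activation energy are illustrative placeholders, not values reported in the study.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def effective_diffusivity(temp_celsius, d0, ea):
    """Arrhenius-type equation: D_eff = D0 * exp(-Ea / (R * T))."""
    temp_kelvin = temp_celsius + 273.15  # absolute temperature
    return d0 * math.exp(-ea / (R * temp_kelvin))

# Illustrative placeholder parameters (not values from the study):
D0 = 1.0e-4    # m^2/s, pre-exponential factor
EA = 30_000.0  # J/mol, activation energy

# Effective diffusivity increases with drying temperature over the
# -10 to 25 degC range investigated.
d_low = effective_diffusivity(-10.0, D0, EA)   # at -10 degC
d_high = effective_diffusivity(25.0, D0, EA)   # at 25 degC
```

Fitting `D0` and `EA` to measured diffusivities at several temperatures (e.g. by linear regression on ln D vs. 1/T) is how such correlations are typically obtained.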
Abstract:
Mixtures of single odours were used to explore the receptor response profile across individual antennae of Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae). Seven odours were tested, including floral and green-leaf volatiles: phenyl acetaldehyde, benzaldehyde, β-caryophyllene, limonene, α-pinene, 1-hexanol and 3Z-hexenyl acetate. Electroantennograms of responses to paired mixtures of odours showed considerable variation in receptor tuning across the receptor field between individuals. Data from some moth antennae showed no additivity, which indicated a restricted receptor profile. Results from other moth antennae to the same odour mixtures showed a range of partial additivity. This indicated that a wider array of receptor types was present in these moths, with a greater percentage of the receptors tuned exclusively to each odour. Peripheral receptor fields thus show variation in the spectrum of response within a population of moths when exposed to high doses of plant volatiles. This may be related to recorded variation in host choice within moth populations, as reported by other authors.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuited to traditional cryptographic hashing methods. Traditional hash functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions.
Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, and this is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
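As a minimal illustration of the quantizer-training and binarization stages discussed above, the sketch below learns per-dimension median thresholds from training features, binarizes feature vectors, and compares hashes by Hamming distance. The function names and toy data are hypothetical; real robust hashing schemes use more elaborate quantization and encoding.

```python
import statistics

def train_thresholds(training_features):
    """Learn one quantization threshold per feature dimension
    (here simply the median of the training data)."""
    return [statistics.median(dim) for dim in zip(*training_features)]

def binarize(features, thresholds):
    """Quantize a real-valued feature vector to a binary hash."""
    return [1 if f > t else 0 for f, t in zip(features, thresholds)]

def hamming(h1, h2):
    """Robust hashes are compared by Hamming distance, not equality."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy training set of real-valued feature vectors:
training = [[0.1, 2.0, -0.5], [0.3, 1.0, 0.5], [0.2, 3.0, 0.0]]
thresholds = train_thresholds(training)

h_orig = binarize([0.25, 2.5, -1.0], thresholds)  # hash of an input
h_near = binarize([0.26, 2.4, -0.9], thresholds)  # minor perturbation
```

Because the perturbed features stay on the same side of each threshold, the two hashes match, illustrating the robustness property; the thresholds themselves, however, reveal information about the training distribution, which is the leakage concern the dissertation analyses.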
Abstract:
An influenza virus-inspired polymer mimic nanocarrier was used to deliver siRNA for specific and near complete gene knockdown in an osteosarcoma cell line (U-2 OS). The polymer was synthesized by single-electron transfer living radical polymerization (SET-LRP) at room temperature to avoid the complexities of transfer to monomer or polymer. It was the only LRP method that allowed good block copolymer formation with a narrow molecular weight distribution. At nitrogen-to-phosphorus (N/P) ratios equal to or greater than 20 (polymer concentrations greater than 13.8 μg/mL), delivery of polo-like kinase 1 (PLK1) siRNA gave specific and near complete (>98%) cell death. The polymer further degrades to a benign polymer that showed no toxicity even at polymer concentrations of 200 μg/mL (an N/P ratio of 300), suggesting that our polymer nanocarrier can be used as a very effective siRNA delivery system, including in multiple-dose administration. This work demonstrates that with a well-designed delivery device, siRNA can specifically kill cells without the inclusion of an additional, clinically used, highly toxic co-chemotherapeutic agent. Our work also showed that this delivery system is sensitive enough for the study of off-target knockdown by siRNA.
Abstract:
Migraine is a common genetically linked neurovascular disorder. Approximately 12% of the Caucasian population is affected, including 18% of adult women and 6% of adult men (1, 2). A notable female bias is observed in migraine prevalence studies, with females affected roughly three times more often than males; this is attributed to differences in hormone levels arising from reproductive events. Migraine is extremely debilitating, with a wide-ranging socioeconomic impact that significantly affects people's health and quality of life. A number of neurotransmitter systems have been implicated in migraine, the most studied being the serotonergic and dopaminergic systems. Extensive genetic research has been carried out to identify genetic variants that may alter the activity of genes involved in the synthesis and transport of neurotransmitters in these systems. The biology of the glutamatergic system in migraine is the least studied; however, there is mounting evidence that its constituents could contribute to migraine. The discovery of antagonists that selectively block glutamate receptors has enabled studies of the physiological role of glutamate and opened new perspectives on the potential therapeutic applications of glutamate receptor antagonists in diverse neurological diseases. In this brief review, we discuss the biology of the glutamatergic system in migraine, outlining recent biochemical and genetic findings that support a role for altered glutamatergic neurotransmission in the manifestation of migraine, and the implications of this for migraine treatment.
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer to a 'plug and play' system.
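Linear regression is one of the regression models evaluated above. As a minimal sketch, a single scalar feature (e.g. normalised foreground segment size) can be mapped to a people count with ordinary least squares; the training data below are made up for illustration.

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b on a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var          # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

# Hypothetical training data: normalised segment size vs. people count.
sizes  = [120.0, 240.0, 360.0, 480.0]
counts = [2.0, 4.0, 6.0, 8.0]

a, b = fit_linear(sizes, counts)
predicted = a * 300.0 + b  # count estimate for an unseen frame
```

In the scene-invariant setting, calibration normalises such features between viewpoints before regression, so a model fitted on one camera remains valid on another.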
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs shows great potential for growth, with applications spanning practical domains such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameters and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image-plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path against locally generated descriptions using a probability maximisation process. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types etc. of the cameras must be chosen such that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
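The ground-plane homography calibration described above ultimately gives each camera a 3×3 mapping from its image plane to the global coordinate frame. A minimal sketch of applying such a mapping is shown below; the homography values are hypothetical, and estimating a real one requires at least four robot-position/image-point correspondences.

```python
def apply_homography(H, point):
    """Map an image-plane point (u, v) to ground-plane coordinates (x, y)
    using a 3x3 homography in homogeneous coordinates."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)  # normalise by the homogeneous coordinate

# Hypothetical homography: scales pixels to metres and translates into
# the global frame broadcast by the robot.
H = [[0.01, 0.0, 5.0],
     [0.0, 0.01, 3.0],
     [0.0, 0.0, 1.0]]

ground = apply_homography(H, (200.0, 100.0))
```

Each camera would store its own `H`; once objects are mapped into the shared global frame, the occupancy map and handover protocol can reason about them in common coordinates.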
Abstract:
This paper proposes a new controller for the excitation system to improve rotor angle stability. The proposed controller uses an energy function to predict the desired flux for the generator, achieving improved first-swing stability and enhanced system damping. The controller is designed by predicting the desired value of flux for the next step of the system and then obtaining an appropriate supplementary control input for the excitation system. Simulations are performed on a Single-Machine Infinite-Bus system, and the results verify the effectiveness of the controller. The proposed method equips the excitation system with a feasible and reliable controller for severe disturbances.
Abstract:
Background: Cancer-related malnutrition is associated with increased morbidity, poorer tolerance of treatment, decreased quality of life, increased hospital admissions, and increased health care costs (Isenring et al., 2013). This study's aim was to determine whether a novel, automated screening system was a useful tool for nutrition screening when compared against a full nutrition assessment using the Patient-Generated Subjective Global Assessment (PG-SGA) tool. Methods: A single-site, observational, cross-sectional study was conducted in an outpatient oncology day care unit within a Queensland tertiary facility, with three hundred outpatients (51.7% male; mean age 58.6 ± 13.3 years). Eligibility criteria were: age ≥18 years, receiving anticancer treatment, and able to provide written consent. Patients completed the Malnutrition Screening Tool (MST). Nutritional status was assessed using the PG-SGA. Data for the automated screening system were extracted from the pharmacy software program Charm, including body mass index (BMI) and weight records dating back up to six months. Results: The prevalence of malnutrition was 17%. Any weight loss over the three to six weeks prior to the most recent weight record, as identified by the automated screening system, gave 56.52% sensitivity, 35.43% specificity, 13.68% positive predictive value and 81.82% negative predictive value relative to PG-SGA-classified malnutrition. An MST score of 2 or greater was a stronger predictor of nutritional risk relative to PG-SGA-classified malnutrition (70.59% sensitivity, 69.48% specificity, 32.14% positive predictive value, 92.02% negative predictive value). Conclusions: Both the automated screening system and the MST fell short of the accepted professional standard for sensitivity (80%) or specificity (60%) when compared to the PG-SGA. Although the MST remains the better predictor of malnutrition in this setting, uptake of this tool in the Oncology Day Care Unit remains challenging.
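The reported sensitivity, specificity, PPV and NPV all follow from a 2×2 confusion matrix of screening result against the PG-SGA reference. The sketch below uses illustrative cell counts chosen to be consistent with the reported 17% prevalence and MST percentages; the study's actual counts are not given in the abstract.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix
    (screening result vs. PG-SGA reference classification)."""
    sensitivity = tp / (tp + fn)  # true positives / all malnourished
    specificity = tn / (tn + fp)  # true negatives / all well-nourished
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Illustrative counts: 300 patients, 51 malnourished (17% prevalence).
sens, spec, ppv, npv = screening_metrics(tp=36, fp=76, fn=15, tn=173)
```

With these counts the four metrics reproduce the MST figures quoted above (70.59%, 69.48%, 32.14%, 92.02%), which is how such percentages are derived from raw screening data.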
Abstract:
The IEC 61850 family of standards for substation communication systems was released in the early 2000s and includes IEC 61850-8-1 and IEC 61850-9-2, which enable Ethernet to be used for process-level connections between transmission substation switchyards and control rooms. This paper presents an investigation of process bus protection performance, as the in-service behaviour of multi-function process buses is largely unknown. An experimental approach was adopted that used a Real Time Digital Simulator and 'live' substation automation devices. The effect of sampling synchronization error and network traffic on transformer differential protection performance was assessed and compared to conventional hard-wired connections. Ethernet was used for all sampled value measurements, circuit breaker tripping, transformer tap-changer position reports and Precision Time Protocol synchronization of sampled value merging unit sampling. Test results showed that the protection relay under investigation operated correctly with process bus network traffic approaching 100% capacity. The protection system was not adversely affected by synchronizing errors significantly larger than the standards permit, suggesting these requirements may be overly conservative. This 'closed loop' approach, using substation automation hardware, validated the operation of protection relays under extreme conditions. Digital connections using a single shared Ethernet network outperformed conventional hard-wired solutions.
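As rough context for the traffic levels involved, a back-of-envelope estimate of sampled value bandwidth can be made as below. The per-frame size is an assumption for illustration; actual sampled value frame sizes depend on the merging unit's dataset configuration.

```python
# Back-of-envelope sampled value (SV) traffic on an Ethernet process bus.
NOMINAL_FREQ_HZ = 50      # system frequency
SAMPLES_PER_CYCLE = 80    # IEC 61850-9-2 LE rate for protection applications
FRAME_BYTES = 125         # assumed Ethernet frame size per SV message

frames_per_second = NOMINAL_FREQ_HZ * SAMPLES_PER_CYCLE   # frames/s per stream
bits_per_second = frames_per_second * FRAME_BYTES * 8     # bit/s per stream

# Number of full streams that fit in ~95% of a 100 Mbit/s link:
streams_on_100M = int(100_000_000 * 0.95 // bits_per_second)
```

Under these assumptions a single SV stream consumes roughly 4 Mbit/s, so a few dozen streams plus GOOSE and management traffic can drive a 100 Mbit/s process bus toward the near-100% loads tested in the paper.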
Abstract:
The term 'fashion system' describes the inter-relationships between production and consumption, illustrating how the production of fashion is a collective activity. For instance, Yuniya Kawamura notes that systems for the production of fashion differ around the globe and are subject to constant change, and Jennifer Craik draws attention to an 'array of competing and intermeshing systems cutting across western and non-western cultures'. In China, Shanghai's nascent fashion system seeks to emulate the Eurocentric system of Fashion Weeks and industry support groups. It promises emergent designers a platform for global competition, yet there are tensions from within. Interaction with a fashion system inevitably means becoming validated or legitimised. Legitimisation in turn depends upon gatekeepers who make aesthetic judgments about the status, quality and cultural value of a designer's work. Notwithstanding the proliferation of fashion media, in Shanghai a new gatekeeper has arrived, seeking to filter authenticity from artifice, offering truth in a fashion market saturated with fakery and the hollowness of foreign consumptive practice, and providing a place of sanctuary for Chinese fashion design. This paper thus discusses how new agencies are allowing designers in Shanghai greater control over their brand image while creating novel opportunities for promotion and sales. It explores why designers choose this new model and provides new knowledge of the curation of fashion by these gatekeepers.
Abstract:
Background: Despite the emerging use of treadmills integrated with pressure platforms as outcome tools in both clinical and research settings, published evidence regarding the measurement properties of these new systems is limited. This study evaluated the within-day and between-day repeatability of spatial, temporal and vertical ground reaction force parameters measured by a treadmill system instrumented with a capacitance-based pressure platform. Methods: Thirty-three healthy adults (mean age, 21.5 ± 2.8 years; height, 168.4 ± 9.9 cm; mass, 67.8 ± 18.6 kg) walked barefoot on a treadmill system (FDM-THM-S, Zebris Medical GmbH) on three separate occasions. For each testing session, participants set their preferred pace but were blinded to treadmill speed. Spatial (foot rotation, step width, stride and step length), temporal (stride and step times; duration of stance, swing, and single and double support) and peak vertical ground reaction force variables were collected over a 30-second capture period, equating to an average of 52 ± 5 steps of steady-state walking. Testing was repeated one week after the initial trial and again, for a third time, 20 minutes later. Repeated-measures ANOVAs within a generalized linear modelling framework were used to assess between-session differences in gait parameters. Agreement between gait parameters measured within the same day (sessions 2 and 3) and between days (sessions 1 and 2; 1 and 3) was evaluated using the 95% repeatability coefficient. Results: There were statistically significant differences in the majority (14/16) of temporal, spatial and kinetic gait parameters over the three test sessions (P < .01). The minimum change that could be detected with 95% confidence ranged between 3% and 17% for temporal parameters, 14% and 33% for spatial parameters, and 4% and 20% for kinetic parameters between days. Within-day repeatability was similar to that observed between days.
Temporal and kinetic gait parameters were typically more consistent than spatial parameters. The 95% repeatability coefficient for vertical force peaks ranged between ±53 and ±63 N. Conclusions: The limits of agreement in spatial parameters and ground reaction forces for the treadmill system encompass previously reported changes with neuromuscular pathology and footwear interventions. These findings provide clinicians and researchers with an indication of the repeatability and sensitivity of the Zebris treadmill system for detecting changes in common spatiotemporal gait parameters and vertical ground reaction forces.
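The 95% repeatability coefficient used above is commonly computed, following the Bland-Altman approach, as 1.96 × √2 × the within-subject standard deviation. A minimal sketch from hypothetical paired test-retest measurements (the force values below are made up, not the study's data):

```python
import math

def repeatability_coefficient(pairs):
    """95% repeatability coefficient from paired test-retest measurements:
    1.96 * sqrt(2) * within-subject SD, with the within-subject variance
    estimated from the paired differences."""
    n = len(pairs)
    # Within-subject variance for paired data: mean of (difference^2) / 2.
    sw2 = sum((a - b) ** 2 for a, b in pairs) / (2 * n)
    return 1.96 * math.sqrt(2) * math.sqrt(sw2)

# Hypothetical peak vertical force readings (N), session 2 vs. session 3:
pairs = [(820.0, 850.0), (760.0, 740.0), (905.0, 880.0), (680.0, 700.0)]
rc = repeatability_coefficient(pairs)  # newtons
```

A difference between two sessions smaller than `rc` cannot be distinguished from measurement noise with 95% confidence, which is how the ±53 to ±63 N figures should be read.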
Abstract:
Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
Abstract:
In this paper we present a novel place recognition algorithm inspired by recent discoveries in human visual neuroscience. The algorithm combines fast but intolerant low-resolution whole-image matching with highly tolerant sub-image patch matching processes. The approach does not require prior training and works on single images (although we use a cohort normalization score to exploit temporal frame information), alleviating the need for either a velocity signal or an image sequence and differentiating it from current state-of-the-art methods. We demonstrate the algorithm on the challenging Alderley sunny day – rainy night dataset, which has previously been solved only by integrating over 320-frame-long image sequences. The system achieves 21.24% recall at 100% precision, matching drastically different day and night-time images of places while successfully rejecting match hypotheses between highly aliased images of different places. The results provide a new benchmark for single-image, condition-invariant place recognition.
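One common form of cohort normalization, sketched below with made-up distances, rescales a raw matching distance against the distribution of distances to competing candidate places; this is an illustrative z-score variant, not necessarily the exact score used in the paper.

```python
import statistics

def cohort_normalised_score(distance, cohort_distances):
    """Normalise a raw matching distance against a cohort of competing
    candidate distances so that scores are comparable across queries.
    Higher scores indicate a match that stands out from the cohort."""
    mu = statistics.mean(cohort_distances)
    sigma = statistics.stdev(cohort_distances)
    return (mu - distance) / sigma

# Hypothetical raw distances from one query frame to candidate places:
cohort = [10.0, 12.0, 11.0, 13.0, 9.0]

strong = cohort_normalised_score(4.0, cohort)   # well below the cohort
weak = cohort_normalised_score(11.0, cohort)    # indistinguishable from it
```

Normalising in this way lets a fixed decision threshold work across frames whose absolute distances vary with conditions, which matters when matching sunny daytime images against rainy night-time ones.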
Abstract:
A security system based on recognition of the iris of the human eye using the wavelet transform is presented. The zero-crossings of the wavelet transform are used to extract unique features from the grey-level profiles of the iris. The recognition process is performed in two stages. The first stage consists of building a one-dimensional representation of the grey-level profiles of the iris, followed by obtaining the wavelet transform zero-crossings of the resulting representation. The second stage is the matching procedure for iris recognition. The proposed approach uses only a few selected intermediate resolution levels for matching, thus making it computationally efficient as well as less sensitive to noise and quantisation errors. A normalisation process is implemented to compensate for size variations due to possible changes in the camera-to-face distance. The technique has been tested on real images in both noise-free and noisy conditions. The technique is being investigated for real-time implementation, as a stand-alone system, for access control to high-security areas.
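A minimal sketch of locating zero-crossings in a 1-D signal is given below with toy data; the actual system takes the zero-crossings of the wavelet transform of the iris grey-level profile at several resolution levels and matches the crossing positions between representations.

```python
def zero_crossings(signal):
    """Indices where a 1-D signal changes sign. In the iris system these
    positions form the compact feature representation at each scale."""
    return [i for i in range(1, len(signal))
            if (signal[i - 1] < 0) != (signal[i] < 0)]

# Toy smoothed grey-level difference profile (illustrative values only):
profile = [0.4, 0.1, -0.3, -0.5, 0.2, 0.6, -0.1]
crossings = zero_crossings(profile)  # sign changes at indices 2, 4 and 6
```

Matching then reduces to comparing two such lists of crossing positions (and the signal behaviour between them), which is far cheaper than comparing full grey-level profiles and is what makes the intermediate-resolution-only strategy computationally efficient.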