925 results for Label-free techniques
Abstract:
Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of these arbitrary thresholds, limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph’s functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
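A minimal sketch of the general idea, assuming Gaussian likelihoods for the "match" and "non-match" descriptor-difference score distributions (the paper's actual probability model may differ; all names below are illustrative):

```python
# Hypothetical sketch: convert a whole-image descriptor difference score into a
# match probability via Bayes' rule, assuming Gaussian likelihoods fitted to
# training scores (not necessarily the paper's exact model).
import numpy as np
from scipy.stats import norm

def fit_score_model(match_scores, nonmatch_scores):
    """Fit Gaussian likelihoods to training difference scores."""
    return (norm(np.mean(match_scores), np.std(match_scores)),
            norm(np.mean(nonmatch_scores), np.std(nonmatch_scores)))

def match_probability(score, match_dist, nonmatch_dist, prior_match=0.01):
    """Posterior probability that two images depict the same place."""
    p_match = match_dist.pdf(score) * prior_match
    p_nonmatch = nonmatch_dist.pdf(score) * (1.0 - prior_match)
    return p_match / (p_match + p_nonmatch)
```

The prior_match term stands in for whatever place prior a probabilistic localizer would normally supply; the posterior can then be fed to the localization back-end in place of a hard threshold.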
Abstract:
Visual localization in outdoor environments is often hampered by natural variation in appearance caused by weather phenomena, diurnal fluctuations in lighting, and seasonal changes. Such changes are global across an environment and, in the case of global light changes and seasonal variation, the change in appearance occurs in a regular, cyclic manner. Visual localization could be greatly improved if it were possible to predict the appearance of a particular location at a particular time, based on the appearance of the location in the past and knowledge of the nature of appearance change over time. In this paper, we investigate whether global appearance changes in an environment can be learned sufficiently well to improve visual localization performance. We use time of day as a test case, and generate transformations between morning and afternoon using sample images from a training set. We demonstrate that the learned transformation generalizes beyond the training data and show that the resulting visual localization on a test set is improved relative to raw image comparison. The improvement in localization remains when the area is revisited several weeks later.
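One way to picture such a learned appearance transformation, under the assumption of a simple linear mapping between whole-image descriptors of the same places at different times of day (the abstract does not commit to this exact form), is the least-squares sketch below:

```python
# Illustrative sketch: learn a linear morning -> afternoon transformation of
# whole-image descriptors by least squares, then apply it before matching.
# The linear form is an assumption; the paper's actual method may differ.
import numpy as np

def learn_transform(morning, afternoon):
    """morning, afternoon: (n_places, d) descriptor arrays of the same places."""
    W, *_ = np.linalg.lstsq(morning, afternoon, rcond=None)
    return W  # (d, d) matrix mapping morning descriptors toward afternoon ones

def localize(query_morning, database_afternoon, W):
    """Return the index of the best-matching afternoon image for a morning query."""
    predicted = query_morning @ W
    dists = np.linalg.norm(database_afternoon - predicted, axis=1)
    return int(np.argmin(dists))
```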
Abstract:
In order to increase the accuracy of patient positioning for complex radiotherapy treatments, various 3D imaging techniques have been developed. MegaVoltage Cone Beam CT (MVCBCT) can utilise existing hardware to implement a 3D imaging modality to aid patient positioning. MVCBCT has been investigated using an unmodified Elekta Precise linac and iView amorphous silicon electronic portal imaging device (EPID). Two methods of delivery and acquisition have been investigated for imaging an anthropomorphic head phantom and a quality assurance phantom. Phantom projections were successfully acquired and CT datasets reconstructed using both acquisition methods. Bone, tissue and air were clearly resolvable in both phantoms, even with low-dose (22 MU) scans. The feasibility of MVCBCT was investigated using a standard linac, an amorphous silicon EPID, and a combination of a free, open-source reconstruction toolkit and custom in-house software written in Matlab. The resultant image quality has been assessed and presented. Although bone, tissue and air were resolvable in all scans, artifacts are present and scan doses are increased compared with standard portal imaging. The feasibility of MVCBCT with an unmodified Elekta Precise linac and EPID has been considered, along with the identification of possible areas for future development in artifact correction techniques to further improve image quality.
Abstract:
Purpose. To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits. Methods. A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring mode was developed and validated. It was used to quantify the free cholesterol content in human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted. Results. Screening, full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that the diagnostic ions located in the mass range m/z 350 to 400 were those derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired using selected ion monitoring (SIM) were analyzed for the abundance ratios of diagnostic ions relative to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (at 3:1 signal to noise) and a limit of quantification of 8 ng (at 10:1 signal to noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, which was higher than in rabbit tears (10 ± 5 μg/g). Conclusions. A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Using this method demonstrated that humans have higher free cholesterol levels in their tears than rabbits, in agreement with previous reports. This paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
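The quantification step is simple ratio arithmetic: the analyte-to-internal-standard ion abundance ratio from the SIM data is read against a standard curve. The sketch below, with entirely illustrative numbers, shows the calculation:

```python
# Sketch of stable-isotope dilution quantification: the analyte/internal-standard
# ion abundance ratio from SIM data is read against a standard curve prepared
# with known cholesterol amounts. All numbers here are illustrative only.
import numpy as np

# Calibration: known cholesterol amounts (ng) vs. measured A(cholesterol)/A(D6-IS)
known_ng = np.array([10, 25, 50, 100, 200])
measured_ratio = np.array([0.21, 0.52, 1.04, 2.05, 4.10])
slope, intercept = np.polyfit(known_ng, measured_ratio, 1)

# Unknown tear extract: convert its ion-abundance ratio back to ng of cholesterol
sample_ratio = 1.38
cholesterol_ng = (sample_ratio - intercept) / slope
print(f"free cholesterol in extract: {cholesterol_ng:.1f} ng")
```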
Jacobian-free Newton-Krylov methods with GPU acceleration for computing nonlinear ship wave patterns
Abstract:
The nonlinear problem of steady free-surface flow past a submerged source is considered as a case study for three-dimensional ship wave problems. Of particular interest is the distinctive wedge-shaped wave pattern that forms on the surface of the fluid. By reformulating the governing equations with a standard boundary-integral method, we derive a system of nonlinear algebraic equations that enforce a singular integro-differential equation at each midpoint on a two-dimensional mesh. Our contribution is to solve the system of equations with a Jacobian-free Newton-Krylov method together with a banded preconditioner that is carefully constructed with entries taken from the Jacobian of the linearised problem. Further, we are able to utilise graphics processing unit acceleration to significantly increase the grid refinement and decrease the run-time of our solutions in comparison to schemes that are presently employed in the literature. Our approach provides opportunities to explore the nonlinear features of three-dimensional ship wave patterns, such as the shape of steep waves close to their limiting configuration, in a manner that has been possible in the two-dimensional analogue for some time.
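As a rough illustration of the numerical approach, the sketch below applies a Jacobian-free Newton-Krylov solve with a banded preconditioner built from a linearised Jacobian to a much simpler 1-D nonlinear boundary-value problem, not the ship-wave equations. It uses SciPy's newton_krylov as an assumption-laden stand-in for the authors' GPU-accelerated implementation.

```python
# Minimal JFNK sketch: solve the discretised problem u'' = exp(u), u(0)=u(1)=0,
# matrix-free, with a banded (tridiagonal) approximation of the linearised
# Jacobian used as the preconditioner for the inner Krylov iterations.
import numpy as np
from scipy.linalg import solve_banded
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

n = 200
h = 1.0 / (n + 1)

def residual(u):
    """Discrete residual of u'' - exp(u) = 0 with u(0) = u(1) = 0."""
    upad = np.concatenate(([0.0], u, [0.0]))
    return (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

# Banded preconditioner from the linearised problem u'' - u = 0 (Jacobian at u = 0)
ab = np.zeros((3, n))
ab[0, 1:] = 1.0 / h**2           # super-diagonal
ab[1, :] = -2.0 / h**2 - 1.0     # main diagonal
ab[2, :-1] = 1.0 / h**2          # sub-diagonal
M = LinearOperator((n, n), matvec=lambda v: solve_banded((1, 1), ab, v))

u = newton_krylov(residual, np.zeros(n), method="lgmres", inner_M=M)
print("max |residual| =", np.max(np.abs(residual(u))))
```

The same pattern, a matrix-free residual plus an inexpensive banded solve as inner preconditioner, is what carries over to the much larger two-dimensional mesh described in the abstract.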
Abstract:
In this paper we introduce and discuss the nature of free-play in the context of three open-ended interactive art installation works. We observe the situated free-play of participants in these environments and, building on prior work, devise a set of sensitising terms derived both from the literature and from what we observe of participants interacting there. These sensitising terms act as guides and are designed to be used by those who experience, evaluate or report on open-ended interactive art. That is, we propose these terms as a common-ground language to be used by participants communicating while in the art work to describe their experience, by researchers in the various stages of the research process (observation, coding, analysis, reporting, and publication), and by interdisciplinary researchers working across the fields of HCI and art. This work builds a foundation for understanding the relationship between free-play, open-ended environments, and interactive installations, and contributes sensitising terms useful to the HCI community for the discussion and analysis of open-ended interactive art works.
Abstract:
Polarisation diversity is a technique to improve the quality of mobile communications, but its reliability is suboptimal because it depends on the mobile channel and the antenna orientations at both ends of the mobile link. A method to optimise the reliability is established by minimising the dependency on antenna orientations. While the mobile base station can have fixed antenna orientation, the mobile terminal is typically a handheld device with random orientations. This means orientation invariance needs to be established at the receiver in the downlink, and at the transmitter in the uplink. This research presents separate solutions for both cases, and is based on the transmission of an elliptically polarised signal synthesised from the channel statistics. Complete receiver orientation invariance is achieved in the downlink. Effects of the transmitter orientation are minimised in the uplink.
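As a loose illustration of synthesising a polarisation state from channel statistics (a generic eigen-analysis sketch, not the method developed in this work), one can pick the transmit Jones vector as the dominant eigenvector of an estimated 2x2 polarisation covariance matrix; the resulting complex component ratio generally describes an elliptical polarisation:

```python
# Illustrative sketch only: derive an elliptically polarised transmit state from
# channel statistics by taking the dominant eigenvector of the 2x2 polarisation
# covariance matrix. Names and approach are assumptions, not the thesis's method.
import numpy as np

def transmit_polarisation(channel_samples):
    """channel_samples: (n, 2) complex gains seen on two orthogonal polarisations."""
    R = channel_samples.conj().T @ channel_samples / len(channel_samples)
    eigvals, eigvecs = np.linalg.eigh(R)      # Hermitian 2x2 covariance
    return eigvecs[:, np.argmax(eigvals)]     # unit Jones vector; its complex
                                              # component ratio defines the ellipse
```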
Abstract:
The article introduces a novel platform for conducting controlled and risk-free driving and traveling behavior studies, called the Cyber-Physical System Simulator (CPSS). The key features of CPSS are: (1) simulation of multiuser immersive driving in a three-dimensional (3D) virtual environment; (2) integration of traffic and communication simulators with human driving based on dedicated middleware; and (3) accessibility of the multiuser driving simulator on popular software and hardware platforms. This combination of features allows us to easily collect large-scale data on interesting phenomena regarding the interaction between multiple user drivers, which is not possible with current single-user driving simulators. The core original contribution of this article is threefold: (1) we introduce a multiuser driving simulator based on DiVE, our original massively multiuser networked 3D virtual environment; (2) we introduce OpenV2X, a middleware for simulating vehicle-to-vehicle and vehicle-to-infrastructure communication; and (3) we present two experiments based on our CPSS platform. The first experiment investigates the “rubbernecking” phenomenon, where a platoon of four user drivers experiences an accident in the oncoming direction of traffic. The second is a pilot study on the effectiveness of a Cooperative Intelligent Transport Systems advisory system.
Abstract:
Previous studies have shown that the human lens contains glycerophospholipids with ether linkages. These lipids differ from conventional glycerophospholipids in that the sn-1 substituent is attached to the glycerol backbone via a 1-O-alkyl or a 1-O-alk-1'-enyl ether rather than an ester bond. The present investigation employed a combination of collision-induced dissociation (CID) and ozone-induced dissociation (OzID) to unambiguously distinguish such 1-O-alkyl and 1-O-alk-1'-enyl ethers. Using these methodologies, the human lens was found to contain several abundant 1-O-alkyl glycerophosphoethanolamines, including GPEtn(16:0e/9Z-18:1), GPEtn(11Z-18:1e/9Z-18:1), and GPEtn(18:0e/9Z-18:1), as well as a related series of unusual 1-O-alkyl glycerophosphoserines, including GPSer(16:0e/9Z-18:1), GPSer(11Z-18:1e/9Z-18:1), and GPSer(18:0e/9Z-18:1), that to our knowledge have not previously been observed in human tissue. Isomeric 1-O-alk-1'-enyl ethers were absent or of low abundance. Examination of the double bond position within the phospholipids using OzID revealed that several positional isomers were present, including sites of unsaturation at the n-9, n-7, and even n-5 positions. Tandem CID/OzID experiments revealed a preference for double bonds in the n-7 position of 1-O-ether linked chains, while n-9 double bonds predominated in the ester-linked fatty acids [e.g., GPEtn(11Z-18:1e/9Z-18:1) and GPSer(11Z-18:1e/9Z-18:1)]. Different combinations of these double bond positional isomers within chains at the sn-1 and sn-2 positions point to a remarkable molecular diversity of ether lipids within the human lens.
Abstract:
E-mail spam remains a scourge and a menacing nuisance for users, internet and network service operators, and providers, in spite of the anti-spam techniques available; spammers relentlessly circumvent the anti-spam techniques embedded or installed as software products on both the client and server sides of fixed and mobile devices. This continuous evasion degrades the capabilities of these anti-spam techniques, and none of them provides a comprehensive, reliable solution to the problem posed by spam and spammers. A major problem arises, for instance, when these techniques misjudge or misclassify legitimate emails as spam (false positives), or fail to block spam on the sending SMTP server (false negatives). The spam then passes on to the receiver, yet the originating server neither notices nor has an auto-alert service to indicate that the spam it was designed to prevent has slipped through to the receiver's SMTP server; the receiver's SMTP server may in turn fail to stop the spam from reaching the user's device, again with no auto-alert mechanism to report this failure, causing a staggering cost in lost time, effort and money. This paper provides a comparative literature overview of some of these anti-spam techniques, especially the filtering technologies designed to prevent spam, examining their merits and demerits with a view to enhancing their capabilities, and offers evaluative, analytical recommendations that will be the subject of further research.
Abstract:
In this paper, we propose a new multi-class steganalysis for binary images. The proposed method can identify the type of steganographic technique used by examining the given binary image. In addition, our method is also capable of differentiating an image with a hidden message from one without. To do this, we extract a set of features from the binary image. The feature extraction method combines techniques extended from our previous work with new methods proposed in this paper. Based on the extracted feature sets, we construct our multi-class steganalysis using an SVM classifier. We also present empirical results to demonstrate that the proposed method can effectively identify five different types of steganography.
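A hedged sketch of the classification stage only, using scikit-learn's SVC for the multi-class decision; the paper's binary-image feature extraction is assumed to have already been applied and is not reproduced here:

```python
# Sketch of the multi-class decision stage: the paper's binary-image features
# are assumed to be extracted into X; an SVM then assigns each image to one of
# five steganographic methods or "clean". Parameters here are illustrative.
from sklearn.svm import SVC

def train_steganalyser(X, y):
    """X: (n_samples, n_features) extracted features;
    y: labels such as "clean", "method1", ..., "method5"."""
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # one-vs-one multi-class by default
    clf.fit(X, y)
    return clf

def identify(clf, features):
    """Predict which (if any) steganographic technique produced the image."""
    return clf.predict([features])[0]
```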
Abstract:
Objective: To evaluate the effectiveness and robustness of Anonym, a tool for de-identifying free-text health records based on conditional random field classifiers informed by linguistic and lexical features, as well as features extracted by pattern matching techniques. De-identification of personal health information in electronic health records is essential for the sharing and secondary usage of clinical data. De-identification tools that adapt to different sources of clinical data are attractive as they require minimal intervention to guarantee high effectiveness. Methods and Materials: The effectiveness and robustness of Anonym are evaluated across multiple datasets, including the widely adopted Integrating Biology and the Bedside (i2b2) dataset, used for evaluation in a de-identification challenge. The datasets used here vary in the type of health records, source of data, and their quality, with one of the datasets containing optical character recognition errors. Results: Anonym identifies and removes up to 96.6% of personal health identifiers (recall) with a precision of up to 98.2% on the i2b2 dataset, outperforming the best system proposed in the i2b2 challenge. The effectiveness of Anonym across datasets is found to depend on the amount of information available for training. Conclusion: Our findings show that Anonym compares to the best approach from the 2006 i2b2 shared task. It is easy to retrain Anonym with new datasets; if retrained, the system is robust to variations in training size, data type and quality, in the presence of sufficient training data.
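A simplified sketch of this kind of token-level CRF de-identifier, using the third-party sklearn-crfsuite package and a small stand-in feature set (Anonym's full linguistic, lexical and pattern-matching features are not reproduced here):

```python
# Simplified CRF de-identification sketch in the style described above, using
# sklearn-crfsuite. The features are a small stand-in for Anonym's feature set.
import re
import sklearn_crfsuite

def token_features(tokens, i):
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_digit": tok.isdigit(),
        "looks_like_date": bool(re.match(r"\d{1,2}/\d{1,2}/\d{2,4}$", tok)),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }

def train_deidentifier(sentences, labels):
    """sentences: list of token lists; labels: matching lists of PHI/O tags."""
    X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
    crf.fit(X, labels)
    return crf
```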
Abstract:
Background: Cancer monitoring and prevention relies on the timely notification of cancer cases. However, the abstraction and classification of cancer from the free text of pathology reports and other relevant documents, such as death certificates, are complex and time-consuming activities. Aims: In this paper, approaches for the automatic detection of notifiable cancer cases as the cause of death from free-text death certificates supplied to Cancer Registries are investigated. Method: A number of machine learning classifiers were studied. Features were extracted using natural language processing techniques and the Medtex toolkit, and encompassed stemmed words, bi-grams, and concepts from the SNOMED CT medical terminology. The baseline consisted of a keyword spotter using keywords extracted from the long descriptions of ICD-10 cancer-related codes. Results: Death certificates with notifiable cancer listed as the cause of death can be effectively identified with the methods studied in this paper. A Support Vector Machine (SVM) classifier achieved the best performance, with an overall F-measure of 0.9866 when evaluated on a set of 5,000 free-text death certificates using the token stem feature set. The SNOMED CT concept plus token stem feature set reached the lowest variance (0.0032) and false negative rate (0.0297) while achieving an F-measure of 0.9864. The SVM classifier accounts for the first 18 of the top 40 evaluated runs and was the most robust classifier, with a variance of 0.001141, half that of the other classifiers. Conclusion: The selection of features had the greatest influence on classifier performance, although the type of classifier employed also affected performance. In contrast, the feature weighting schema had a negligible effect. Specifically, stemmed tokens, with or without SNOMED CT concepts, were the most effective features when combined with an SVM classifier.
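A hedged sketch of the token-stem plus SVM configuration, using scikit-learn and an NLTK stemmer as stand-ins for the Medtex-based pipeline (SNOMED CT concept features and the paper's exact weighting scheme are omitted):

```python
# Stand-in sketch of a token-stem + bi-gram + SVM classifier for death
# certificates; not the Medtex pipeline itself, and SNOMED CT features omitted.
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stemmer = PorterStemmer()

def stem_tokens(text):
    return [stemmer.stem(tok) for tok in text.lower().split()]

def train_classifier(certificates, labels):
    """certificates: list of free-text death certificates;
    labels: 1 if notifiable cancer is the cause of death, else 0."""
    model = make_pipeline(
        TfidfVectorizer(tokenizer=stem_tokens, ngram_range=(1, 2)),  # stems + bi-grams
        LinearSVC(C=1.0),
    )
    model.fit(certificates, labels)
    return model
```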
Abstract:
PURPOSE. Phospholipids are a major component of lens fiber cells and influence the activity of membrane proteins. Previous investigations of fatty acid uptake by the lens are limited. The purpose of the present study was thus to determine whether exogenous fatty acids could be taken up by the rat lens and incorporated into molecular phospholipids. METHODS. Lenses were incubated with fluorescently labeled palmitic acid and then analyzed by confocal microscopy. Concurrently, lenses incubated with either fluorescently labeled palmitic acid or the more physiologically relevant (13)C(18)-oleic acid were sectioned into nuclear and cortical regions and analyzed by highly sensitive and structurally selective electrospray ionization tandem mass spectrometry techniques. RESULTS. The detection of fluorescently labeled palmitic acid, even after 16 hours of incubation, was limited to approximately the outer 25% to 30% of the rat lens. Mass spectrometry also revealed the presence of free (13)C(18)-oleic acid in the cortex but not the nucleus. No evidence could be found for incorporation of fluorescently labeled palmitic acid into phospholipids; however, a low level of (13)C(18)-oleic acid incorporation into phosphatidylethanolamine (PE), specifically PE (PE 16:0/(13)C(18) 18:1) was detected in the lens cortex after 16 hours. CONCLUSIONS. These data demonstrate that uptake of exogenous (e.g., dietary fatty acids) by the lens and their incorporation into phospholipids is minimal, most likely occurring only during de novo synthesis in the outermost region of the lens. This finding adds support to the hypothesis that once synthesized there is no active remodeling or turnover of fiber cell phospholipids.