933 results for alignment-free methods


Relevance: 30.00%

Abstract:

Non-rigid face alignment is an important task in a wide range of applications, but existing tracking-based non-rigid face alignment methods are either inaccurate or require a person-specific model. This dissertation develops simultaneous alignment algorithms that overcome these constraints and provide alignment with high accuracy, efficiency, and robustness to varying image conditions, while requiring only a generic model.

Relevance: 30.00%

Abstract:

Cell-to-cell adhesion is an important aspect of malignant spreading that is often observed in images from the experimental cell biology literature. Since cell-to-cell adhesion plays an important role in controlling the movement of individual malignant cells, it is likely that cell-to-cell adhesion also influences the spatial spreading of populations of such cells. Therefore, it is important for us to develop biologically realistic simulation tools that can mimic the key features of such collective spreading processes to improve our understanding of how cell-to-cell adhesion influences the spreading of cell populations. Previous models of collective cell spreading with adhesion have used lattice-based random walk frameworks which may lead to unrealistic results, since the agents in the random walk simulations always move across an artificial underlying lattice structure. This is particularly problematic in high-density regions where it is clear that agents in the random walk align along the underlying lattice, whereas no such regular alignment is ever observed experimentally. To address these limitations, we present a lattice-free model of collective cell migration that explicitly incorporates crowding and adhesion. We derive a partial differential equation description of the discrete process and show that averaged simulation results compare very well with numerical solutions of the partial differential equation.
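The abstract does not spell out the discrete mechanism, but a hedged sketch of the general idea — a lattice-free random walk in which crowding is enforced by hard-core exclusion and adhesion probabilistically aborts moves of agents with close neighbours — might look as follows (every parameter and both rules are illustrative assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not taken from the paper)
N = 200          # number of agents
L = 50.0         # square domain side length
step = 1.0       # movement step length
diam = 1.0       # agent hard-core (crowding) diameter
r_adh = 1.2      # neighbour radius within which adhesion acts
p_adh = 0.5      # probability an adhered agent's move attempt is aborted

# Initialise agents in a central strip, mimicking a spreading population
pos = np.column_stack((rng.uniform(20, 30, N), rng.uniform(0, L, N)))

def attempt_move(pos, i):
    """One lattice-free move attempt with crowding and adhesion."""
    others = np.delete(pos, i, axis=0)
    dist = np.linalg.norm(others - pos[i], axis=1)
    # Adhesion: with probability p_adh, an agent with a close neighbour stays put
    if (dist < r_adh).any() and rng.random() < p_adh:
        return pos
    theta = rng.uniform(0, 2 * np.pi)             # uniformly random direction
    target = (pos[i] + step * np.array([np.cos(theta), np.sin(theta)])) % L
    # Crowding: reject moves that would overlap another agent's hard core
    if (np.linalg.norm(others - target, axis=1) < diam).any():
        return pos
    pos[i] = target
    return pos

for t in range(1000):                              # random sequential update
    pos = attempt_move(pos, rng.integers(N))
```

Averaging the occupancy of many such realisations is the kind of quantity one would compare against a continuum (partial differential equation) description.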

Relevance: 30.00%

Abstract:

Whole-image descriptors such as GIST have been used successfully for persistent place recognition when combined with temporal or sequential filtering techniques. However, whole-image descriptor localization systems often apply a heuristic rather than a probabilistic approach to place recognition, requiring substantial environment-specific tuning prior to deployment. In this paper we present a novel online solution that uses statistical approaches to calculate place recognition likelihoods for whole-image descriptors, without requiring either environmental tuning or pre-training. Using a real-world benchmark dataset, we show that this method creates distributions appropriate to a specific environment in an online manner. Our method performs comparably to FAB-MAP in raw place recognition performance, and integrates into a state-of-the-art probabilistic mapping system to provide superior performance to whole-image methods that are not based on true probability distributions. The method provides a principled means for combining the powerful change-invariant properties of whole-image descriptors with probabilistic back-end mapping systems, without the need for prior training or system tuning.
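As an illustration of the general idea only (not the paper's actual statistics): the whole-image difference scores observed online can themselves serve as the non-match model, and a new score can then be converted into a match likelihood against that model. The Gaussian form and all numbers below are assumptions:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical whole-image difference scores between the current image and
# each database image (lower = more similar); random stand-in data here.
rng = np.random.default_rng(1)
scores = rng.normal(loc=0.5, scale=0.1, size=500)
scores[42] = 0.12                      # one genuinely matching place

# Fit the "non-matching" score distribution online from the scores themselves;
# a single true match is rare enough not to bias the moments much.
mu, sigma = scores.mean(), scores.std()

# Probability that a non-matching image would score this low by chance;
# 1 - cdf is then a pseudo-belief that a score is a true match
# (high when the score is improbably low under the non-match model).
belief = 1.0 - norm.cdf(scores, loc=mu, scale=sigma)
best = int(np.argmin(scores))
print(best, belief[best])
```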

Relevance: 30.00%

Abstract:

Whole-image descriptors have recently been shown to be remarkably robust to perceptual change, especially compared to local features. However, whole-image-based localization systems typically rely on heuristic methods for determining appropriate matching thresholds in a particular environment. These environment-specific tuning requirements, and the lack of a meaningful interpretation of these arbitrary thresholds, limit the general applicability of these systems. In this paper we present a Bayesian model of probability for whole-image descriptors that can be seamlessly integrated into localization systems designed for probabilistic visual input. We demonstrate this method using CAT-Graph, an appearance-based visual localization system originally designed for a FAB-MAP-style probabilistic input. We show that using whole-image descriptors as visual input extends CAT-Graph's functionality to environments that experience a greater amount of perceptual change. We also present a method of estimating whole-image probability models in an online manner, removing the need for a prior training phase. We show that this online, automated training method can perform comparably to pre-trained, manually tuned local descriptor methods.
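A minimal sketch of the Bayesian step, assuming Gaussian likelihoods for matching and non-matching descriptor difference scores (the parameters and the Gaussian form are assumptions for illustration; the paper's model may differ):

```python
from scipy.stats import norm

def match_posterior(d, prior=0.01,
                    mu_match=0.2, sd_match=0.05,   # assumed match-score model
                    mu_non=0.5, sd_non=0.10):      # assumed non-match model
    """Posterior probability that difference score d comes from a true match."""
    l_match = norm.pdf(d, mu_match, sd_match) * prior
    l_non = norm.pdf(d, mu_non, sd_non) * (1.0 - prior)
    return l_match / (l_match + l_non)

print(match_posterior(0.25))   # a low difference score -> high match probability
```

A posterior of this kind is the sort of probabilistic visual input a FAB-MAP-style back end such as CAT-Graph expects, in place of a hard heuristic threshold.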

Relevance: 30.00%

Abstract:

This paper presents a novel framework for the unsupervised alignment of an ensemble of temporal sequences. The approach draws inspiration from the axiom that an ensemble of temporal signals stemming from the same source/class should have lower rank when "aligned" than when "misaligned". Our approach shares similarities with recent state-of-the-art methods for unsupervised image ensemble alignment (e.g., RASL), which break the problem into a set of image alignment problems with well-known solutions (i.e., the Lucas-Kanade algorithm). Similarly, we propose a strategy for decomposing temporal ensemble alignment into a set of independent sequence alignment problems, which we claim can be solved reliably through Dynamic Time Warping (DTW). We demonstrate the utility of our method on the Cohn-Kanade+ dataset by aligning expression onset across multiple sequences, which allows us to automate the rapid discovery of event annotations.
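The per-pair subproblem delegated to Dynamic Time Warping has the standard dynamic-programming form. Below is textbook DTW, not the authors' full low-rank ensemble algorithm:

```python
import numpy as np

def dtw(x, y):
    """Classic DTW: cumulative cost of the best monotonic alignment of x and y."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two sequences with the same shape but a shifted onset align cheaply
a = np.sin(np.linspace(0, 3, 40))
b = np.sin(np.linspace(-0.5, 3, 50))
print(dtw(a, b))
```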

Relevance: 30.00%

Abstract:

In hypercompetition, firms that are agile — sensing and responding better to customer requirements — tend to be more successful and achieve supernormal profits. In spite of the widely accepted importance of customer agility, research on this construct is limited. The limited research has also predominantly focused on the firm's perspective of agility. However, we propose that customers are better positioned to determine how well a firm is responding to their requirements (i.e., a firm's customer agility). Taking the customers' standpoint, we address the issue of sense-and-respond alignment from two perspectives: matching and mediating. Based on data collected from customers in a field study, we tested hypotheses pertaining to the two methods of alignment using polynomial regression and response surface methodology. The results provide a good explanation of the role of both forms of alignment in customer satisfaction. Implications for research and practice are discussed.
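The analysis rests on the second-order polynomial regression used in response surface methodology. A sketch of that kind of model, with invented variable names (S for sensing, R for responding) and toy data standing in for the paper's constructs:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
S = rng.normal(size=200)               # hypothetical "sensing" ratings
R = rng.normal(size=200)               # hypothetical "responding" ratings
sat = 3 - (S - R) ** 2 + rng.normal(scale=0.5, size=200)  # toy satisfaction

# Full second-order polynomial used in response surface methodology:
# sat = b0 + b1*S + b2*R + b3*S^2 + b4*S*R + b5*R^2
X = sm.add_constant(np.column_stack((S, R, S**2, S * R, R**2)))
fit = sm.OLS(sat, X).fit()
print(fit.params)   # surface slopes/curvature are read off these coefficients
```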

Relevance: 30.00%

Abstract:

Purpose. To establish a simple and rapid analytical method, based on direct insertion/electron ionization-mass spectrometry (DI/EI-MS), for measuring free cholesterol in tears from humans and rabbits. Methods. A stable-isotope dilution protocol employing DI/EI-MS in selected ion monitoring (SIM) mode was developed and validated. It was used to quantify the free cholesterol content in human and rabbit tear extracts. Tears were collected from adult humans (n = 15) and rabbits (n = 10) and lipids extracted. Results. Screening, full-scan (m/z 40-600) DI/EI-MS analysis of crude tear extracts showed that the diagnostic ions in the mass range m/z 350 to 400 derived from free cholesterol, with no contribution from cholesterol esters. DI/EI-MS data acquired in SIM mode were analyzed for the abundance ratios of diagnostic ions to their stable isotope-labeled analogues arising from the D6-cholesterol internal standard. Standard curves of good linearity were produced, with an on-probe limit of detection of 3 ng (at 3:1 signal to noise) and a limit of quantification of 8 ng (at 10:1 signal to noise). The concentration of free cholesterol in human tears was 15 ± 6 μg/g, which was higher than in rabbit tears (10 ± 5 μg/g). Conclusions. A stable-isotope dilution DI/EI-SIM method for free cholesterol quantification without prior chromatographic separation was established. Using this method demonstrated that humans have higher free cholesterol levels in their tears than rabbits, in agreement with previous reports. This paper provides a rapid and reliable method to measure free cholesterol in small-volume clinical samples. © 2013 The Association for Research in Vision and Ophthalmology, Inc.
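The quantification step in stable-isotope dilution reduces to reading a measured ion-abundance ratio off a linear standard curve. A sketch with invented calibration numbers (the actual curve and values are in the paper):

```python
import numpy as np

# Hypothetical calibration: known cholesterol amounts (ng) spiked against a
# fixed amount of D6-cholesterol, vs. measured diagnostic-ion abundance ratios.
amount_ng = np.array([10, 25, 50, 100, 200])
ratio = np.array([0.21, 0.52, 1.01, 2.05, 3.98])    # cholesterol : D6 ions

slope, intercept = np.polyfit(ratio, amount_ng, 1)  # linear standard curve

def quantify(sample_ratio, sample_mass_g):
    """Free cholesterol concentration (ug/g) from a measured SIM ion ratio."""
    ng = slope * sample_ratio + intercept
    return ng / 1000.0 / sample_mass_g              # ng -> ug, per gram of tears

print(quantify(sample_ratio=1.5, sample_mass_g=0.01))
```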

Relevance: 30.00%

Abstract:

Objective To develop and evaluate machine learning techniques that identify limb fractures and other abnormalities (e.g. dislocations) from radiology reports. Materials and Methods 99 free-text reports of limb radiology examinations were acquired from an Australian public hospital. Two clinicians were employed to identify fractures and abnormalities in the reports; a third, senior clinician resolved disagreements. These assessors found that, of the 99 reports, 48 referred to fractures or abnormalities of limb structures. Automated methods were then used to extract features from these reports that could be useful for their automatic classification. The Naive Bayes classification algorithm and two implementations of the support vector machine algorithm were formally evaluated using cross-validation over the 99 reports. Results Results show that the Naive Bayes classifier accurately identifies fractures and other abnormalities from the radiology reports. These results were achieved when extracting stemmed token bigram and negation features, and when using these features in combination with SNOMED CT concepts related to abnormalities and disorders. The latter feature has not been used in previous work that attempted to classify free-text radiology reports. Discussion Automated classification methods have proven effective at identifying fractures and other abnormalities from radiology reports (F-measure up to 92.31%). Key to the success of these techniques are features such as stemmed token bigrams, negations, and SNOMED CT concepts associated with morphologic abnormalities and disorders. Conclusion This investigation shows early promising results; future work will further validate and strengthen the proposed approaches.
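A minimal reconstruction of one of the evaluated configurations — stemmed token bigrams fed to multinomial Naive Bayes under cross-validation — with toy stand-in reports; the SNOMED CT concept features and the actual hospital reports are not reproduced here:

```python
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()

def stem_tokens(text):
    """Lowercase, whitespace-tokenize and Porter-stem a report."""
    return [stemmer.stem(t) for t in text.lower().split()]

# Toy stand-ins for the 99 radiology reports and their clinician labels
reports = ["undisplaced fracture of the distal radius",
           "no bony injury seen alignment normal",
           "comminuted fracture with dislocation",
           "soft tissue swelling only no fracture"]
labels = [1, 0, 1, 0]   # 1 = fracture/abnormality present

pipe = make_pipeline(
    CountVectorizer(tokenizer=stem_tokens, ngram_range=(2, 2)),  # stemmed bigrams
    MultinomialNB())
print(cross_val_score(pipe, reports, labels, cv=2))
```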

Relevance: 30.00%

Abstract:

Background Accelerometers have become one of the most common methods of measuring physical activity (PA). Thus, the validity of accelerometer data reduction approaches remains an important research area. Yet few studies directly compare data reduction approaches and other PA measures in free-living samples. Objective To compare PA estimates provided by three accelerometer data reduction approaches (Crouter's 2-regression model, Crouter's refined 2-regression model, and the weighted cut-point method adopted in the National Health and Nutrition Examination Survey (NHANES; 2003-2004 and 2005-2006 cycles)), steps, and two self-reported estimates (IPAQ and 7-day PA recall). Methods A worksite sample (N = 87) completed online surveys and wore ActiGraph GT1M accelerometers and pedometers (SW-200) during waking hours for 7 consecutive days. Daily time spent in sedentary, light, moderate, and vigorous intensity activity and the percentage of participants meeting PA recommendations were calculated and compared. Results Crouter's 2-regression (161.8 ± 52.3 minutes/day) and refined 2-regression (137.6 ± 40.3 minutes/day) models provided significantly higher estimates of moderate and vigorous PA, and of the proportion meeting PA recommendations (91% and 92%, respectively), than the NHANES weighted cut-point method (39.5 ± 20.2 minutes/day, 18%). Differences between the other measures were also significant. Conclusions When comparing three accelerometer cut-point methods, steps, and self-report measures, estimates of PA participation vary substantially.
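The cut-point idea itself is simple thresholding of counts per minute. A sketch using illustrative thresholds (the numbers below are assumptions in the vicinity of published ActiGraph cut-points, not the exact NHANES weights or Crouter models):

```python
import numpy as np

# Hypothetical one day of ActiGraph counts/minute for one wearer
rng = np.random.default_rng(3)
cpm = rng.gamma(shape=0.6, scale=300, size=840)   # ~14 waking hours

# Illustrative intensity thresholds in counts/minute (assumed)
cuts = {"sedentary": (0, 100),
        "light": (100, 2020),
        "moderate": (2020, 5999),
        "vigorous": (5999, np.inf)}

minutes = {k: int(((cpm >= lo) & (cpm < hi)).sum())
           for k, (lo, hi) in cuts.items()}
mvpa = minutes["moderate"] + minutes["vigorous"]
print(minutes, "MVPA min/day:", mvpa)
```

Different thresholds (or regression models in place of thresholds) reclassify the same minutes, which is exactly why the compared approaches disagree so substantially.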

Relevance: 30.00%

Abstract:

Introduction Natural product provenance is important in the food, beverage and pharmaceutical industries, both for consumer confidence and for its health implications. Raman spectroscopy has powerful molecular fingerprinting abilities, and the sharp peaks of Surface Enhanced Raman Spectroscopy (SERS) allow distinction between minimally different molecules, so it should be suitable for this purpose. Methods Naturally caffeinated beverages containing Guarana extract, coffee, and (for comparison) the synthetic caffeinated beverage Red Bull (20 µL each) were reacted 1:1 for 10 minutes with gold nanoparticles functionalised with anti-caffeine antibody (ab15221), air dried and analysed in a micro-Raman instrument. The spectral data were processed using Principal Component Analysis (PCA). Results The PCA showed that Guarana-sourced caffeine differed significantly from synthetic caffeine (Red Bull) on component 1 (containing 76.4% of the variance in the data); see Figure 1. The coffee-containing beverages, and in particular Robert Timms (instant coffee), were very similar on component 1, although the barista espresso showed minor variance on that component. Both coffee-sourced caffeine samples differed from Red Bull on component 2 (20% of the variance). [Figure 1: PCA comparing a naturally caffeinated beverage containing Guarana with coffee.] Discussion PCA is an unsupervised multivariate statistical method that reveals patterns within data. Figure 1 shows that caffeine in Guarana is notably different from synthetic caffeine. Other researchers have shown that caffeine in Guarana plants is complexed with tannins. In Figure 1, naturally sourced/lightly processed caffeine (Monster Energy, espresso) is more inherently distinct from synthetic (Red Bull)/highly processed (Robert Timms) caffeine, which is consistent with this finding and demonstrates the technique's applicability. Guarana provenance is important because it is still largely hand produced and demand for it is escalating with recognition of its benefits. This could be a powerful technique for Guarana provenance, and it may extend to other industries where provenance/authentication is required, e.g. the wine or natural pharmaceuticals industries.
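The PCA step itself is standard. A minimal sketch projecting stand-in spectra onto two principal components (the arrays below are random placeholders with artificial peaks, not the actual SERS data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Stand-in spectra: rows = replicate measurements, columns = Raman shifts
guarana = rng.normal(1.0, 0.05, (10, 500)); guarana[:, 200:210] += 2.0
redbull = rng.normal(1.0, 0.05, (10, 500)); redbull[:, 300:310] += 2.0
X = np.vstack((guarana, redbull))

pca = PCA(n_components=2)
scores = pca.fit_transform(X)          # sample coordinates on PC1/PC2
print(pca.explained_variance_ratio_)   # variance captured by each component
```

Plotting the two columns of `scores` against each other gives the kind of component-1/component-2 separation described for Figure 1.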

Relevance: 30.00%

Abstract:

Rapid reforming of natural honey exposed to reactive low-temperature Ar + H2 plasmas produced high-quality, ultra-thin vertical graphenes, without any metal catalyst or external heating. This transformation is only possible in the plasma and fails in similar thermal processes. The process is energy-efficient, environmentally benign, and much cheaper than common synthesis methods based on purified hydrocarbon precursors. The graphenes retain the essential minerals of natural honey, feature reactive open edges, and show reliable gas- and bio-sensing performance.

Relevance: 30.00%

Abstract:

Introduction This investigation aimed to assess the consistency and accuracy of radiation therapists (RTs) performing cone beam computed tomography (CBCT) alignment to fiducial markers (FMs) (CBCTFM) and the soft tissue prostate (CBCTST). Methods Six patients receiving prostate radiation therapy underwent daily CBCTs. Manual alignment of CBCTFM and CBCTST was performed by three RTs. Inter-observer agreement was assessed using a modified Bland–Altman analysis for each alignment method. Clinically acceptable 95% limits of agreement with the mean (LoAmean) were defined as ±2.0 mm for CBCTFM and ±3.0 mm for CBCTST. Differences between CBCTST alignment and the observer-averaged CBCTFM (AvCBCTFM) alignment were analysed. Clinically acceptable 95% LoA were defined as ±3.0 mm for the comparison of CBCTST and AvCBCTFM. Results CBCTFM and CBCTST alignments were performed for 185 images. The CBCTFM 95% LoAmean were within ±2.0 mm in all planes. CBCTST 95% LoAmean were within ±3.0 mm in all planes. Comparison of CBCTST with AvCBCTFM resulted in 95% LoA of −4.9 to 2.6, −1.6 to 2.5 and −4.7 to 1.9 mm in the superior–inferior, left–right and anterior–posterior planes, respectively. Conclusions Significant differences were found between soft tissue alignment and the predicted FM position. FMs are useful in reducing inter-observer variability compared with soft tissue alignment. Consideration needs to be given to margin design when using soft tissue matching due to increased inter-observer variability. This study highlights some of the complexities of soft tissue guidance for prostate radiation therapy.
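For reference, conventional Bland–Altman 95% limits of agreement are the mean difference ± 1.96 standard deviations. A worked sketch with invented observer differences (the paper applies a modified, limits-of-agreement-with-the-mean variant, which is not reproduced here):

```python
import numpy as np

# Hypothetical differences (mm) between one observer's couch shift and the
# observer-averaged shift, over a series of 185 CBCT alignments
rng = np.random.default_rng(5)
diff_mm = rng.normal(loc=-0.4, scale=1.2, size=185)

mean = diff_mm.mean()
sd = diff_mm.std(ddof=1)
loa = (mean - 1.96 * sd, mean + 1.96 * sd)   # 95% limits of agreement
print(f"bias {mean:.1f} mm, 95% LoA {loa[0]:.1f} to {loa[1]:.1f} mm")
```

Limits wider than the clinical tolerance (e.g. ±3.0 mm for soft tissue) are what flag an alignment method as insufficiently consistent.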

Relevance: 30.00%

Abstract:

Purpose To establish whether a passive or an active technique of planning target volume (PTV) definition and treatment for non-small cell lung cancer (NSCLC) delivers the most effective results. This literature review assesses the advantages and disadvantages reported in recent studies of each, while assessing the validity of the two approaches for planning and treatment. Methods A systematic review of literature focusing on the planning and treatment of radiation therapy for NSCLC tumours. Different approaches published in recent articles are subjected to critical appraisal in order to determine their relative efficacy. Results Free-breathing (FB) is the optimal method for performing planning scans, for both patients and departments, as it involves no significant increase in cost, workload or education. Maximum intensity projection (MIP) is the fastest form of delineation; however, it is noted to be less accurate than the ten-phase overlap approach for computed tomography (CT). Although gating has been proven to reduce margins and facilitate sparing of organs at risk, treatment times can be longer, and planning time can be as much as 15 times higher for intensity modulated radiation therapy (IMRT). This raises issues with patient comfort and stabilisation, increasing the chance of geometric miss. Stereotactic treatments can take up to 3 hours to deliver, with corresponding increases in planning and treatment time, as well as the additional hardware, software and training required. Conclusion Four-dimensional computed tomography (4DCT) is superior to 3DCT, with the passive FB approach optimal for PTV delineation and treatment. Departments should use a combination of MIP with visual confirmation to ensure coverage for stage 1 disease. Stages 2-3 should be delineated using ten phases overlaid. Stereotactic and gated treatments for early stage disease should be used accordingly; FB-IMRT is optimal for later stage disease.

Relevance: 30.00%

Abstract:

The one-step preparation of highly anisotropic polymer semiconductor thin films directly from solution is demonstrated. The conjugated polymer poly(3-hexylthiophene) (P3HT) as well as P3HT:fullerene bulk-heterojunction blends can be spin-coated from a mixture of the crystallizable solvent 1,3,5-trichlorobenzene (TCB) and a second carrier solvent such as chlorobenzene. Solidification is initiated by growth of macroscopic TCB spherulites followed by epitaxial crystallization of P3HT on TCB crystals. Subsequent sublimation of TCB leaves behind a replica of the original TCB spherulites. Thus, highly ordered thin films are obtained, which feature square-centimeter-sized domains that are composed of one spherulite-like structure each. A combination of optical microscopy and polarized photoluminescence spectroscopy reveals radial alignment of the polymer backbone in case of P3HT, whereas P3HT:fullerene blends display a tangential orientation with respect to the center of spherulite-like structures. Moreover, grazing-incidence wide-angle X-ray scattering reveals an increased relative degree of crystallinity and predominantly flat-on conformation of P3HT crystallites in the blend. The use of other processing methods such as dip-coating is also feasible and offers uniaxial orientation of the macromolecule. Finally, the applicability of this method to a variety of other semi-crystalline conjugated polymer systems is established. Those include other poly(3-alkylthiophene)s, two polyfluorenes, the low band-gap polymer PCPDTBT, a diketopyrrolopyrrole (DPP) small molecule as well as a number of polymer:fullerene and polymer:polymer blends.

Relevance: 30.00%

Abstract:

Plant food materials are in very high demand in the consumer market; therefore, improved food products and efficient processing techniques are concurrently being researched in food engineering. In this context, numerical modelling and simulation techniques have a very high potential to reveal the fundamentals of the underlying mechanisms involved. However, numerical modelling of plant food materials during drying is quite challenging, mainly due to the complexity of the multiphase microstructure of the material, which undergoes excessive deformations during drying. In this regard, conventional grid-based modelling techniques have limited applicability due to their inflexible grid-based formulation. As a result, meshfree methods have recently been developed which offer a more adaptable approach to problem domains of this nature, due to their fundamentally grid-free character. In this work, a recently developed meshfree two-dimensional plant tissue model is used for a comparative study of microscale morphological changes of several food materials during drying. The model uses Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) to represent the fluid and solid phases of the cellular structure. Simulations are conducted on apple, potato, carrot and grape tissues, and the results are qualitatively and quantitatively compared with experimental findings from the literature. The study revealed that cellular deformations are highly sensitive to cell dimensions, cell wall physical and mechanical properties, middle lamella properties and turgor pressure. In particular, the meshfree model is well capable of simulating critically dried tissues at low moisture content and turgor pressure, which lead to cell wall wrinkling. The findings further highlight the potential applicability of the meshfree approach to modelling large deformations of the plant tissue microstructure during drying, providing a distinct advantage over state-of-the-art grid-based approaches.
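As a flavour of the SPH half of such a coupled model, the standard kernel-summation density estimate — a generic SPH ingredient rather than the authors' tissue model — can be sketched as follows (particle positions, mass and smoothing length are invented):

```python
import numpy as np

def cubic_spline_W(r, h):
    """Standard 2-D cubic-spline SPH smoothing kernel."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h ** 2)          # 2-D normalisation
    w = np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2 - q) ** 3, 0.0))
    return sigma * w

rng = np.random.default_rng(6)
pos = rng.uniform(0, 1, (100, 2))    # fluid particles inside a model cell
m, h = 1.0, 0.1                      # particle mass and smoothing length

# SPH density at each particle: rho_i = sum_j m_j * W(|r_i - r_j|, h)
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
rho = (m * cubic_spline_W(d, h)).sum(axis=1)
print(rho.mean())
```

Because the density (and from it, pressure) is carried by freely moving particles rather than a fixed grid, large cell-wall deformations such as wrinkling do not require remeshing — the adaptability the abstract attributes to meshfree methods.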