966 results for Proximal Point Algorithm


Relevance: 30.00%

Abstract:

Liu, Y. (2005) 'Automatic 3D free-form shape matching using the graduated assignment algorithm', Pattern Recognition 38(10), pp. 1615-1631.

Relevance: 30.00%

Abstract:

Plakhov, A.Y.; Cruz, P. (2004) 'A stochastic approximation algorithm with step size adaptation', Journal of Mathematical Sciences 120(1), pp. 964-973.

Relevance: 30.00%

Abstract:

© 2005-2012 IEEE. Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information in the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for recognition of geometric primitives (e.g., planes and natural quadrics) from a scanned point cloud. One of the most important and most frequently occurring geometric primitives in engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To effectively segment planar regions, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmentation of planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point cloud structuring, a z-buffer algorithm with a triangle-mesh representation in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
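The "standard fitting procedures" mentioned for the fitting step can be illustrated with an ordinary total least-squares plane fit. A minimal NumPy sketch, assuming each segmented region arrives as an N×3 point array (the wavelet-based segmentation itself is not reproduced here):

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane fit to an (N, 3) point array.

    Returns a unit normal n and the centroid c; the fitted plane is
    the set of points x satisfying n . (x - c) = 0.
    """
    c = points.mean(axis=0)
    # the right-singular vector with the smallest singular value of the
    # centred cloud minimises the sum of squared point-to-plane distances
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return vt[-1], c
```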

Relevance: 30.00%

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, it has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to interpret the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation makes it susceptible to observer-specific variation. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion has been successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high quality microscopic images of the tissue must be obtained rapidly for a pathologic assessment to be useful in guiding the intervention. Optical microscopy is a powerful technique for obtaining high-resolution images of tissue morphology in real time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to image tissue morphology at high resolution by combining fluorescence microscopy with vital fluorescent stains, and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, thereby enabling automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but had never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA; it also shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density and background heterogeneity was demonstrated through simulations; specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased APF density in tumor and tumor + muscle images compared to images containing muscle alone. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for detecting local recurrence were 78% and 82%, respectively. These results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential for automated and rapid surveillance of tissue pathology.
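The authors' SCA formulation is not given in the abstract. As a rough illustration of the underlying idea only (flagging features that a background model represents poorly), here is a generic sparse-coding sketch using scikit-learn; the patch size, dictionary size and sparsity level are arbitrary, and this is not the dissertation's algorithm:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def apf_residual_scores(image, background_patches, patch_size=8, n_atoms=32):
    """Score image patches by how poorly a dictionary learned on
    background tissue reconstructs them; high residuals flag
    candidate acriflavine-positive features (APFs)."""
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=3,
                                       random_state=0)
    dico.fit(background_patches.reshape(len(background_patches), -1))
    patches = extract_patches_2d(image, (patch_size, patch_size))
    flat = patches.reshape(len(patches), -1)
    recon = dico.transform(flat) @ dico.components_
    return np.linalg.norm(flat - recon, axis=1)
```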

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin this way. Thus, improvements were made to the microscopic imaging system to (1) improve image contrast by rejecting out-of-focus background fluorescence and (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed, in which the entire FOV is illuminated with a defined spatial pattern rather than by scanning a focal spot, as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA was unable to segment large numbers of APFs in the tumor images: because the FOV of the SIM system is over 13X larger than that of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption of SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, the frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ gave the highest SNR and the lowest percent error in MSER segmentation.
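MSER is available off the shelf in OpenCV. A minimal sketch of segmenting bright blob-like regions from a grayscale frame; the filename is hypothetical and the parameter values are illustrative, not those tuned in this work:

```python
import cv2
import numpy as np

img = cv2.imread('sim_frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file

# positional args: delta, min_area, max_area (illustrative values)
mser = cv2.MSER_create(5, 30, 800)
regions, bboxes = mser.detectRegions(img)

mask = np.zeros_like(img)
for pts in regions:            # each region is an array of (x, y) pixels
    mask[pts[:, 1], pts[:, 0]] = 255
```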

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75 × 75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region, yielding the probability (0-100%) that tumor was located within each 75 × 75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and the tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded it in the positive margins. Thus, 8% of regions in negative margins were false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
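The per-region classification step maps directly onto a standard logistic-regression workflow. A self-contained sketch with synthetic stand-in features (the real inputs would be MSER-derived variables such as APF density and size):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc

# synthetic stand-ins: X holds per-region features, y marks tumour regions
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

clf = LogisticRegression().fit(X, y)
p_tumour = clf.predict_proba(X)[:, 1]      # tumour probability per region
fpr, tpr, thr = roc_curve(y, p_tumour)     # ROC analysis, as in the abstract
print(f"AUC = {auc(fpr, tpr):.2f}")
# margin-level summary: fraction of regions above the 50% threshold
print(f"{np.mean(p_tumour > 0.5):.0%} of regions exceed the 50% threshold")
```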

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP) expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false positive regions that were negative for RFP. One approach to improving the specificity of the diagnostic model was to investigate a fluorophore more specific to tumor. Tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining shows promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.

Relevance: 30.00%

Abstract:

As well as range, the AltiKa altimeter provides estimates of wave height, Hs, and normalized backscatter, σ0, that need to be assessed before statistics based on them are included in climate databases. An analysis of crossovers with the Jason-2 altimeter shows AltiKa Hs values to be biased high by only ≈0.05 m, with a standard deviation (s.d.) of ≈0.1 m for seven-point averages. AltiKa's σ0 values are 2.5-3 dB lower than those from Jason-2, with an s.d. of ≈0.3 dB; these relatively large mismatches are to be expected, as AltiKa measures a different part of the spectrum of sea surface roughness. A new wind speed algorithm is developed by matching the histogram of σ0 values to that of Jason-2 wind speeds. The algorithm is robust to the use of short durations of data, with a consistency at roughly the 0.1 m/s level. Incorporation of Hs as a secondary input reduces the assessed error at crossovers from 0.82 m/s to 0.71 m/s. A comparison across all altimeter frequencies used to date demonstrates that the lowest wind speeds preferentially develop the shortest scales of roughness.
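The histogram-matching step can be sketched as quantile mapping: each σ0 value is assigned the wind speed at the corresponding quantile of the reference (Jason-2) wind-speed distribution, noting the inverse relation between backscatter and wind speed. A minimal sketch; the operational algorithm, including the Hs secondary input, is more involved:

```python
import numpy as np

def wind_from_sigma0(sigma0, ref_wind):
    """Quantile-map sigma0 onto a reference wind-speed distribution.

    sigma0: normalized backscatter values (dB)
    ref_wind: reference wind speeds (m/s)
    """
    # descending ranks: the largest sigma0 maps to the lowest wind speed
    ranks = np.argsort(np.argsort(-np.asarray(sigma0)))
    q = ranks / max(len(sigma0) - 1, 1)
    return np.interp(q, np.linspace(0, 1, len(ref_wind)), np.sort(ref_wind))
```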

Relevance: 30.00%

Abstract:

Dense deployment of wireless local area network (WLAN) access points (APs) is an important part of next generation Wi-Fi, and standardization (802.11ax) efforts are underway. Increasing demand for WLAN connectivity motivates such dense deployments, especially in geographical areas with large numbers of users, such as stadiums, large enterprises, multi-tenant buildings, and urban cities. Although densification of WLAN APs guarantees coverage, it is susceptible to increased interference and uncoordinated association of stations (STAs) to APs, which degrade network throughput. Therefore, to improve network throughput, algorithms are proposed in this thesis to optimally coordinate AP associations in the presence of interference. In essence, coordination of APs in dense WLANs (DWLANs) is achieved through coordination of STAs' associations with APs. While existing approaches suggest tuning APs' beacon powers or using transmit power control (TPC) for association control, here the signal-to-interference-plus-noise ratios (SINRs) of STAs and the clear channel assessment (CCA) threshold of the 802.11 MAC protocol are employed. Because the proposed algorithms do not alter the transmit powers of APs, which determine cell coverage, they enhance throughput while avoiding the coverage holes inherent in cell breathing and TPC techniques. Besides uncoordinated AP associations, unnecessarily frequent transmission deferment is identified as another problem in DWLANs, caused by the clear channel assessment aspect of the carrier-sense multiple access with collision avoidance (CSMA/CA) scheme in the 802.11 standards and the short spatial-reuse distance between co-channel APs. To address this problem in addition to AP association coordination, an algorithm is proposed for CCA threshold adjustment in each AP cell, such that the CCA threshold used in one cell mitigates transmission deferment in neighboring cells. Performance evaluation reveals that the proposed association optimization algorithms achieve significant throughput gains compared with the default strongest signal first (SSF) association scheme in the current 802.11 standard. Further gains are observed when CCA threshold adjustment is combined with the optimized association: when STA-AP association is optimized and the CCA threshold is adjusted in each cell, throughput improves, while transmission delay and the number of packet re-transmissions due to collision and contention decrease significantly.
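As a baseline for what SINR-based association means in practice, here is a minimal greedy sketch (not the thesis's optimization algorithms, which are not specified in the abstract): each STA picks the AP with the best SINR, treating all other APs as co-channel interferers.

```python
import numpy as np

def sinr_association(p_rx, noise_w=1e-12):
    """Greedy STA-AP association by best SINR.

    p_rx[i, j] is the received power (W) at STA i from AP j; this
    simple worst-case model treats every other AP as an interferer.
    """
    n_sta, _ = p_rx.shape
    assoc = np.empty(n_sta, dtype=int)
    for i in range(n_sta):
        interference = p_rx[i].sum() - p_rx[i]
        sinr = p_rx[i] / (interference + noise_w)
        assoc[i] = int(np.argmax(sinr))
    return assoc
```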

Relevance: 30.00%

Abstract:

Three-dimensional reconstruction from volumetric medical images (e.g. CT, MRI) is a well-established technology used in patient-specific modelling. However, there are many cases where only 2D (planar) images may be available, e.g. if radiation dose must be limited or if retrospective data is being used from periods when 3D imaging was not available. This study addresses such cases by proposing an automated method to create 3D surface models from planar radiographs. The method consists of (i) contour extraction from the radiograph using an Active Contour (Snake) algorithm, (ii) selection of the closest-matching 3D model from a library of generic models, and (iii) warping the selected generic model to improve correlation with the extracted contour.
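The Snake step of such a pipeline is available in scikit-image. A minimal, self-contained sketch on a synthetic "radiograph" (a bright disk), with illustrative parameter values and assuming a recent scikit-image version where snake coordinates are (row, col):

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# synthetic radiograph: a bright disk whose outline the snake should trace
img = np.zeros((200, 200))
rr, cc = np.ogrid[:200, :200]
img[(rr - 100) ** 2 + (cc - 100) ** 2 < 60 ** 2] = 1.0

# circular initial contour in (row, col) coordinates
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 80 * np.sin(s), 100 + 80 * np.cos(s)])

snake = active_contour(gaussian(img, sigma=3), init,
                       alpha=0.015, beta=10, gamma=0.001)
```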

The method proved to be fully automated, rapid and robust on a given set of radiographs. Measured mean surface distance errors were low when comparing models reconstructed from matching pairs of CT scans and planar X-rays (2.57-3.74 mm) and were within the ranges reported in similar studies. The benefits of the method are that it requires only a single radiographic image to perform the surface reconstruction task and that it is fully automated. Mechanical simulations of loaded bone with different levels of reconstruction accuracy showed that the error in predicted strain fields grows in proportion to the geometric error. In conclusion, models generated by the proposed technique are deemed acceptable for realistic patient-specific simulations when 3D data sources are unavailable.

Relevance: 30.00%

Abstract:

We present the DONUTS autoguiding algorithm, designed to fix stellar positions at the sub-pixel level for high-cadence time-series photometry and also capable of autoguiding on defocused stars. DONUTS was designed to calculate guide corrections from a series of science images and to recentre telescope pointing between exposures. The algorithm has the unique ability to calculate guide corrections from point spread functions ranging from undersampled to heavily defocused. We present the case for why such an algorithm is important for high precision photometry and give our results from off-sky and on-sky testing. We discuss the limitations of DONUTS and the facilities where it will soon be deployed.
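The abstract does not spell out the internals; one common way to compute such corrections is to collapse the reference and science frames into 1-D profiles and cross-correlate them. A minimal NumPy sketch under that assumption (the actual DONUTS implementation may differ):

```python
import numpy as np

def profile_shift(ref, sci):
    """Pixel shift that best aligns two 1-D profiles, taken at the
    peak of their cross-correlation."""
    corr = np.correlate(sci - sci.mean(), ref - ref.mean(), mode='full')
    return int(np.argmax(corr)) - (len(ref) - 1)

def guide_correction(ref_img, sci_img):
    """x/y pointing corrections from column and row sums of the frames."""
    dx = profile_shift(ref_img.sum(axis=0), sci_img.sum(axis=0))
    dy = profile_shift(ref_img.sum(axis=1), sci_img.sum(axis=1))
    return dx, dy
```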

Relevance: 30.00%

Abstract:

Significant recent progress has shown ear recognition to be a viable biometric. Good recognition rates have been demonstrated under controlled conditions, using manual registration or specialised equipment. This paper describes a new technique that improves the robustness of ear registration and recognition, addressing issues of pose variation, background clutter and occlusion. By treating the ear as a planar surface and creating a homography transform from SIFT feature matches, ears can be registered accurately. The feature matches reduce the gallery size and enable a precise ranking using a simple 2D distance algorithm. When applied to the XM2VTS database, the technique gives results comparable to PCA with manual registration. Further analysis on more challenging datasets demonstrates the technique to be robust to background clutter, viewing angles of up to ±13 degrees, and over 20% occlusion.
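The registration step described here maps directly onto OpenCV primitives. A minimal sketch, with hypothetical image filenames and the conventional Lowe ratio test (0.75) for filtering matches:

```python
import cv2
import numpy as np

probe = cv2.imread('probe_ear.png', cv2.IMREAD_GRAYSCALE)    # hypothetical
gallery = cv2.imread('gallery_ear.png', cv2.IMREAD_GRAYSCALE)  # files

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(probe, None)
kp2, des2 = sift.detectAndCompute(gallery, None)

matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
# planar-surface assumption: a homography registers probe onto gallery
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```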

Relevance: 30.00%

Abstract:

This research presents a fast algorithm for projected support vector machines (PSVM): a basis vector set (BVS) is selected for the kernel-induced feature space, and the training points are projected onto the subspace spanned by the selected BVS. A standard linear support vector machine (SVM) is then produced in the subspace with the projected training points. As the dimension of the subspace is determined by the size of the selected basis vector set, the size of the resulting SVM expansion can be specified. A two-stage algorithm is derived which selects and refines the basis vector set, achieving a locally optimal model. The model expansion coefficients and bias are updated recursively as the basis set and support vector set grow or shrink. The condition for a point to be classed as lying outside the span of the current basis vector set, and hence selected as a new basis vector, is derived and embedded in the recursive procedure; this guarantees the linear independence of the produced basis set. The proposed algorithm is tested and compared with an existing sparse primal SVM (SpSVM) and a standard SVM (LibSVM) on seven public benchmark classification problems. The new algorithm is designed for human activity recognition on smart devices and embedded sensors, where limited memory and processing resources must be exploited to the full and where more robust and accurate classification directly benefits the user. Experimental results demonstrate the effectiveness and efficiency of the proposed algorithm. This work builds upon a previously published algorithm created specifically for activity recognition within mobile applications for the EU Haptimap project [1]. The algorithms detailed in this paper are more memory- and resource-efficient, making them suitable for bigger data sets and more easily trained SVMs.
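The paper's two-stage BVS selection cannot be reproduced from the abstract alone, but its overall structure (project onto a subspace spanned by a small basis set, then train a linear SVM there) resembles an off-the-shelf Nyström approximation. A sketch under that analogy, with illustrative sizes and kernel parameters:

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# 100 sampled basis vectors span the subspace; a linear SVM is then
# trained on the projected points (sizes and gamma are illustrative)
model = make_pipeline(
    Nystroem(kernel='rbf', gamma=0.1, n_components=100, random_state=0),
    LinearSVC(),
)
model.fit(X, y)
print(f"training accuracy: {model.score(X, y):.2f}")
```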

Relevance: 30.00%

Abstract:

The ever-growing energy consumption in mobile networks, stimulated by the expected growth in data traffic, has provided the impetus for mobile operators to refocus network design, planning and deployment towards reducing the cost per bit, while at the same time taking a significant step towards reducing their operational expenditure. As a step towards a cost-effective mobile system, 3GPP LTE-Advanced has adopted the coordinated multi-point (CoMP) transmission technique due to its ability to mitigate and manage inter-cell interference (ICI). Using CoMP, both cell-average and cell-edge throughput are boosted. However, there is room for reducing energy consumption further by exploiting the inherent flexibility of dynamic resource allocation protocols. To this end, the packet scheduler plays the central role in determining the overall performance of 3GPP long-term evolution (LTE), which is based on packet-switched operation, and provides a potential research playground for optimizing energy consumption in future networks. In this thesis we investigate the baseline performance of downlink CoMP using traditional scheduling approaches, and subsequently go beyond them to propose novel energy efficient scheduling (EES) strategies that achieve power-efficient transmission to the UEs while enabling both system energy efficiency gains and fairness improvement. However, ICI can still be prominent when multiple nodes use common resources at different power levels inside the cell, as in so-called heterogeneous network (HetNet) environments. HetNets are comprised of two or more tiers of cells. The first, or higher, tier is a traditional deployment of cell sites, often referred to in this context as macrocells. The lower tiers are termed small cells, and can appear as microcells, picocells or femtocells. The HetNet has attracted significant interest from key manufacturers as one of the enablers of high speed data at low cost. Research until now has revealed several key hurdles that must be overcome before HetNets can achieve their full potential: bottlenecks in the backhaul must be alleviated, as must their seamless interworking with CoMP. In this thesis we explore exactly the latter hurdle, and present innovative ideas on advancing CoMP to work in synergy with HetNet deployment, complemented by a novel resource allocation policy for tighter HetNet interference management. A system-level simulator has been used to analyze the proposed algorithms/protocols, and the results show that an energy gain of up to 20% can be observed.
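For context on the scheduling baseline such work typically builds on, here is a minimal proportional-fair scheduler sketch (the thesis's EES strategies themselves are not specified in the abstract):

```python
import numpy as np

def pf_schedule(inst_rates, avg_thpt, tc=100.0):
    """One interval of a proportional-fair scheduler.

    inst_rates: achievable rate of each UE this interval
    avg_thpt: exponentially averaged throughput of each UE
              (initialise to small positive values to avoid division by zero)
    Returns the scheduled UE index and the updated averages.
    """
    ue = int(np.argmax(inst_rates / avg_thpt))          # PF metric
    served = np.zeros_like(inst_rates)
    served[ue] = inst_rates[ue]
    avg_thpt = (1 - 1 / tc) * avg_thpt + served / tc    # EWMA update
    return ue, avg_thpt
```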

Relevance: 30.00%

Abstract:

Recent integrated circuit technologies have opened up the possibility of designing parallel architectures with hundreds of cores on a single chip. The design space of these parallel architectures is huge, with many architectural options. Exploring the design space gets even more difficult if, beyond performance and area, we also consider extra metrics like performance and area efficiency, where the designer tries to find the architecture with the best performance per chip area and the best sustainable performance. In this paper we present an algorithm-oriented approach to designing a many-core architecture. Instead of exploring the design space based on the experimental execution results of a particular benchmark of algorithms, our approach is to make a formal analysis of the algorithms, considering the main architectural aspects, and to determine how each architectural aspect relates to the performance of the architecture when running an algorithm or set of algorithms. The architectural aspects considered include the number of cores, the local memory available in each core, the communication bandwidth between the many-core architecture and the external memory, and the memory hierarchy. To exemplify the approach, we made a theoretical analysis of a dense matrix multiplication algorithm and determined an equation that relates the number of execution cycles to the architectural parameters. Based on this equation, a many-core architecture has been designed. The results obtained indicate that a 100 mm² integrated circuit implementation of the proposed architecture in a 65 nm technology is able to achieve 464 GFLOPS (double precision floating-point) for a memory bandwidth of 16 GB/s; this corresponds to a performance efficiency of 71%. In a 45 nm technology, a 100 mm² chip attains 833 GFLOPS, which corresponds to 84% of peak performance. These figures are better than those obtained by previous many-core architectures, except for the area efficiency, which is limited by the lower memory bandwidth considered. The results achieved are also better than those of previous state-of-the-art many-core architectures designed specifically to achieve high performance for matrix multiplication.
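The paper's exact cycle-count equation is not reproduced in the abstract; a standard roofline-style relation consistent with the description, for n × n dense matrix multiplication on P cores with per-core flop rate F (flops/cycle), tile dimension b limited by per-core local memory M, and external bandwidth B (words/cycle), would be a sketch like:

```latex
T \;\approx\; \max\!\left(\frac{2n^{3}}{P\,F},\; \frac{Q(b)}{B}\right),
\qquad
Q(b) \;\approx\; \frac{2n^{3}}{b} + n^{2},
\qquad
3b^{2} \le M,
```

where Q(b) is the off-chip word traffic of the blocked algorithm. The bandwidth term is what allows a 16 GB/s memory interface to cap the achievable efficiency, as the abstract reports.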

Relevance: 30.00%

Abstract:

An adaptive antenna array combines the signal of each element, using constraints to produce the radiation pattern of the antenna while maximizing the performance of the system. Direction of arrival (DOA) algorithms are applied to determine the directions of impinging signals, whereas beamforming techniques are employed to determine the appropriate weights for the array elements that create the desired pattern. In this paper, a detailed analysis of both categories of algorithms is made for the case of a planar antenna array. Several simulation results show that it is possible to point an antenna array in a desired direction based on DOA estimation and beamforming algorithms. A comparison of the algorithms' performance in terms of runtime and accuracy is made; these characteristics depend on the SNR of the incoming signal.
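A minimal sketch of the DOA side, using a conventional (Bartlett) spectrum; the paper considers a planar array, but a uniform linear array keeps the sketch short:

```python
import numpy as np

def steering(theta, n_elem, d=0.5):
    """ULA steering vector; d is the element spacing in wavelengths."""
    return np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

def bartlett_spectrum(snapshots, angles, n_elem):
    """Conventional DOA spectrum from an (n_elem, n_snapshots) array."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
    return np.array([
        np.real(steering(a, n_elem).conj() @ R @ steering(a, n_elem))
        for a in angles
    ])

# the spectrum's peak estimates the arrival direction; the same steering
# vector, conjugated, serves as delay-and-sum beamforming weights
```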

Relevance: 30.00%

Abstract:

Introduction: Nephrotoxicity is a major complication of gentamicin, which is widely used in the treatment of bacterial infections, particularly those caused by Gram-negative bacteria. Gentamicin induces tubular apoptosis, but the molecular mechanisms involved remain poorly understood. In the present study, we examined the role of reactive oxygen species (ROS) and of the proteins Bax, Bmf and caspase-12 (Csp-12) in the mechanism by which gentamicin induces apoptosis of renal proximal tubule (RPT) cells and renal damage in mice. Method: Adult (18-19 weeks old) male non-Tg mice and transgenic (CAT-Tg) mice overexpressing catalase specifically in their RPT cells were treated with intraperitoneal injections of gentamicin (20 mg/kg/day) for 5 consecutive days and then euthanized. The kidneys were examined by histology, by immunohistochemistry for the presence of oxidative stress and for expression of the Bax, Bmf and Csp-12 proteins, and by TUNEL assay to assess apoptosis. We also examined the effect of gentamicin on ROS generation and apoptosis in immortalized rat RPT cells (IRPTC) in vitro. Results: In vivo, in non-Tg mice, gentamicin induced tubulopathy and RPT apoptosis, stimulated ROS production, increased Bax and Bmf expression as detected by immunohistochemistry, and increased caspase-12 activity. These changes were attenuated in CAT-Tg mice. In vitro, gentamicin induced apoptosis of the cells; co-treatment with catalase normalized these effects in IRPTC. Conclusion: These data demonstrate that gentamicin-induced RPT cell apoptosis is mediated, at least in part, by the generation of ROS.

Relevance: 30.00%

Abstract:

A program is presented for the construction of relativistic symmetry-adapted molecular basis functions. It is applicable to 36 finite double point groups. The algorithm, based on the projection operator method, automatically generates linearly independent basis sets. Time reversal invariance is included in the program, leading to additional selection rules in the non-relativistic limit.
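The character projection operator at the heart of the method is standard group theory; for an irreducible representation Γ of dimension d_Γ of a double point group G, it reads:

```latex
\hat{P}^{(\Gamma)} \;=\; \frac{d_{\Gamma}}{|G|} \sum_{g \in G} \chi^{(\Gamma)}(g)^{*}\, \hat{O}(g),
```

where χ^(Γ)(g) is the character and Ô(g) the operator representing the group element g. Applying this projector to trial atomic functions and discarding linearly dependent results, as the program does automatically, yields the symmetry-adapted basis.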