71 results for Computer Vision for Robotics and Automation


Relevance:

100.00%

Publisher:

Abstract:

RATIONALE: Interstitial lung disease (ILD) in patients with systemic sclerosis (SSc) is associated with increased morbidity and mortality. Gastroesophageal reflux (GER) is considered a contributing factor in the pathogenesis of ILD. OBJECTIVES: To characterize GER (acid and nonacid) in patients with SSc with and without ILD. METHODS: Patients with SSc underwent pulmonary high-resolution computed tomography (HRCT) and 24-hour impedance-pH monitoring off proton pump inhibitor therapy. The presence of pulmonary fibrosis was assessed using validated HRCT scores. Reflux monitoring parameters included the number of acid and nonacid reflux episodes, proximal migration of the refluxate, and distal esophageal acid exposure. Unless otherwise specified, data are presented as median (25th-75th percentile). MEASUREMENTS AND MAIN RESULTS: Forty consecutive patients with SSc (35 female; mean age, 53 yr; range, 24-71; 15 patients with diffuse and 25 with limited SSc) were investigated; 18 (45%) patients with SSc had pulmonary fibrosis (HRCT score ≥ 7). Patients with SSc with ILD had higher (P < 0.01) esophageal acid exposure (10.3 [7.5-15] vs. 5.2 [1.5-11]), a higher (P < 0.01) number of acid (41 [31-58] vs. 19 [10-23]) and nonacid (25 [20-35] vs. 17 [11-19]) reflux episodes, and a higher (P < 0.01) number of reflux episodes reaching the proximal esophagus (42.5 [31-54] vs. 15 [8-22]) compared with patients with SSc with normal HRCT scores. Pulmonary fibrosis scores (HRCT score) correlated well with the number of reflux episodes in the distal (r² = 0.637) and proximal (r² = 0.644) esophagus. CONCLUSIONS: Patients with SSc with ILD have more severe reflux (i.e., more reflux episodes and more reflux reaching the proximal esophagus). Whether the development of ILD in patients with SSc can be prevented by reflux-reducing treatments needs to be investigated.

Relevance:

100.00%

Publisher:

Abstract:

Prevention and treatment of osteoporosis rely on understanding the micromechanical behaviour of bone and its influence on fracture toughness and cell-mediated adaptation processes. Post-yield properties may be assessed by nonlinear finite element simulations of nanoindentation using elastoplastic and damage models. This computational study aims to determine the influence of yield surface shape and damage on the depth-dependent response of bone to nanoindentation using spherical and conical tips. Yield surface shape and damage were shown to have a major impact on the indentation curves. Their influence on indentation modulus, hardness, their ratio, as well as the elastic-to-total work ratio is well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not statistically significant (p<0.0001). For spherical tips, damage was not a significant parameter (p<0.0001). The knowledge gained can be used to develop an inverse method for identifying post-elastic properties of bone from nanoindentation.
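
As an illustration of the kind of multilinear regression reported here, the sketch below fits an indentation response against yield-surface-shape, damage, and depth parameters with statsmodels; the variable names and the synthetic values are placeholders for illustration, not quantities from the study.

```python
# Hedged sketch: multilinear regression of an indentation response
# (e.g. indentation modulus) on yield-surface-shape and damage parameters.
# All names and values are illustrative placeholders, not study data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200                                   # number of simulated indentations
yield_ratio = rng.uniform(0.3, 1.0, n)    # yield-surface shape parameter
damage = rng.uniform(0.0, 0.8, n)         # critical damage parameter
depth = rng.uniform(0.1, 2.0, n)          # indentation depth (placeholder units)

# Synthetic response with noise, for demonstration only.
modulus = 15.0 + 4.0 * yield_ratio - 6.0 * damage + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([yield_ratio, damage, depth]))
fit = sm.OLS(modulus, X).fit()
# The summary lists coefficients and p-values per predictor, i.e. the kind of
# output from which the (non-)significance of a factor would be judged.
print(fit.summary(xname=["const", "yield_ratio", "damage", "depth"]))
```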

Relevance:

100.00%

Publisher:

Abstract:

Computed tomography (CT)-based finite element (FE) models of vertebral bodies assess fracture load in vitro better than dual-energy X-ray absorptiometry, but boundary conditions affect the stress distribution under the endplates, which may influence ultimate load and damage localisation under post-yield strains. Therefore, HRpQCT-based homogenised FE models of 12 vertebral bodies were subjected to axial compression with two distinct boundary conditions: embedding in polymethylmethacrylate (PMMA) and bonding to a healthy intervertebral disc (IVD) with distinct hyperelastic properties for nucleus and annulus. Bone volume fraction and fabric assessed from the HRpQCT data were used to determine the elastic, plastic and damage behaviour of bone. Ultimate forces obtained with PMMA were 22% higher than with the IVD but correlated highly (R² = 0.99). At ultimate force, distinct fractions of damage were computed in the endplates (PMMA: 6%, IVD: 70%), cortex and trabecular sub-regions, which confirms previous observations that, in contrast to PMMA embedding, failure initiated underneath the nuclei in healthy IVDs. In conclusion, axial loading of vertebral bodies via PMMA embedding rather than a healthy IVD overestimates the ultimate load and leads to distinct damage localisation and failure patterns.

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a novel vision for further enhanced Internet of Things services. Based on a variety of data (such as location data, ontology-backed search queries, and indoor and outdoor conditions), the Prometheus framework is intended to support users with helpful recommendations and information preceding a search for context-aware data. Adapting concepts from artificial intelligence, Prometheus proposes user-tailored answers based on numerous conditions. A number of potential applications of the Prometheus framework are illustrated. Added value and possible future studies are discussed in the conclusion.

Relevance:

100.00%

Publisher:

Abstract:

To enable buyers to be better informed before purchasing, products and services can be experienced virtually on the internet. Research into virtual experience (VE) and the related construct of telepresence (TP) as means of online marketing has made great progress in recent years. However, there is still disagreement in the literature concerning the exact understanding of these terms. In this study, the two terms are analyzed by means of a systematically executed literature review, differentiated from one another, and their respective understandings explained. To our knowledge, this study is the first to compare the concepts of VE and TP in a systematic way. The analysis shows that TP is regarded as the feeling of presence conveyed by a communication medium. VE, on the other hand, is defined as an active state of the consumer induced by the use of computer-based presentation formats, and constitutes a subtype of TP. These findings are intended to help VE and TP become more uniformly understood and to make it easier to compare the results of future studies. Finally, the literature review makes it possible to derive focal points for future research.

Relevance:

100.00%

Publisher:

Abstract:

Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best-performing components of the BoF architecture, as well as to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy of the order of 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
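
A minimal sketch of a Bag-of-Features pipeline in this spirit is given below, using OpenCV and scikit-learn. It swaps flat MiniBatchKMeans for the hierarchical k-means, uses a much smaller vocabulary, and samples dense SIFT only on the HSV value channel, so it illustrates the architecture rather than reproducing the reported system; image lists, labels, grid step and vocabulary size are assumptions.

```python
# Hedged sketch of a Bag-of-Features food classifier: dense SIFT on the HSV
# image, a k-means visual vocabulary, histogram encoding and a linear SVM.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()

def dense_sift(bgr_image, step=8, size=8):
    """SIFT descriptors on a regular grid over the HSV value channel."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    gray = hsv[:, :, 2]  # simplification: the paper uses all HSV channels
    h, w = gray.shape
    kps = [cv2.KeyPoint(float(x), float(y), size)
           for y in range(0, h, step) for x in range(0, w, step)]
    _, desc = sift.compute(gray, kps)
    return desc

def encode(desc, vocab):
    """Hard-assignment histogram of visual words, L1-normalised."""
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train(images, labels, n_words=1000):   # the paper uses 10,000 words
    descs = [dense_sift(img) for img in images]
    vocab = MiniBatchKMeans(n_clusters=n_words, random_state=0)
    vocab.fit(np.vstack(descs))
    X = np.array([encode(d, vocab) for d in descs])
    clf = LinearSVC().fit(X, labels)
    return vocab, clf

def predict(image, vocab, clf):
    return clf.predict(encode(dense_sift(image), vocab)[None, :])[0]
```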

Relevance:

100.00%

Publisher:

Abstract:

There is great demand for easily accessible, user-friendly dietary self-management applications. Yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed from the acquired images and the volume is computed from the reconstructed shape. The algorithm was tested on food models (dummy foods) with known volume and on real served food. Volume accuracy was of the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimal constraints on users.
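
The sketch below illustrates the two-view reconstruction idea with OpenCV and SciPy: match SIFT features between the pair of meal images, recover the relative pose from the essential matrix, triangulate a sparse point cloud, and take the volume of its convex hull. The camera matrix K is an assumed input and the result is only defined up to scale; the described system additionally recovers the metric scale and a denser surface model, which this sketch does not attempt.

```python
# Hedged two-view volume sketch: SIFT matching -> essential matrix -> pose ->
# triangulation -> convex-hull volume. Metric scale is NOT recovered here.
import cv2
import numpy as np
from scipy.spatial import ConvexHull

def volume_from_two_views(img1, img2, K):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching of descriptors between the two views.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
            if m.distance < 0.75 * n.distance]
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])

    # Relative camera pose from the essential matrix (RANSAC for robustness).
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)

    # Triangulate the inlier correspondences into a sparse 3D point cloud.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    inl = mask.ravel() > 0
    X = cv2.triangulatePoints(P1, P2, p1[inl].T, p2[inl].T)
    X = (X[:3] / X[3]).T

    # Volume of the convex hull of the cloud, in arbitrary (up-to-scale) units.
    return ConvexHull(X).volume
```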

Relevance:

100.00%

Publisher:

Abstract:

One of the most promising applications for the restoration of small or moderately sized focal articular lesions is mosaicplasty (MP). Although recurrent hemarthrosis is a rare complication after MP, various strategies have recently been designed to find an effective filling material to prevent postoperative bleeding from the donor site. The porous biodegradable polymer Polyactive (PA; a polyethylene glycol terephthalate/polybutylene terephthalate copolymer) represents a promising solution in this respect. A histological evaluation of long-term PA-filled donor sites obtained from 10 experimental horses was performed. In this study, attention was primarily focused on the bone tissue developed in the plug. Computer-assisted image analysis and quantitative polarized light microscopic measurements of decalcified, longitudinally sectioned, dimethylmethylene blue (DMMB)- and picrosirius red (PS)-stained sections revealed that the coverage area of the bone trabeculae in the PA-filled donor tunnels was substantially (25%) enlarged compared to the neighboring cancellous bone. For this quantification, identical ROIs (regions of interest) were used and compared. Birefringence retardation values were also measured with a polarized light microscope using monochromatic light. Identical retardation values were recorded from the bone trabeculae developed in the PA and in the neighboring bone, indicating that the collagen orientation pattern does not differ significantly among these trabeculae. Based on our new data, we speculate that PA promotes bone formation, and that some of the currently identified degradation products of PA may enhance osteoconduction and osteoinduction inside the donor canal.
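
As an illustration of ROI-based coverage quantification of this kind, the sketch below thresholds a stained-section image and reports the area fraction of trabecular pixels inside a rectangular ROI using scikit-image; the file name, ROI coordinates, and the choice of Otsu thresholding are assumptions for illustration, not the study's protocol.

```python
# Hedged sketch: area fraction of stained bone trabeculae inside an ROI.
# File name, ROI box and Otsu thresholding are illustrative assumptions.
import numpy as np
from skimage import io, color, filters

def trabecular_area_fraction(image_path, roi):
    """roi = (row0, row1, col0, col1) in pixels."""
    img = io.imread(image_path)
    gray = color.rgb2gray(img[..., :3]) if img.ndim == 3 else img

    r0, r1, c0, c1 = roi
    patch = gray[r0:r1, c0:c1]

    # Toy assumption: stained trabeculae are darker than the background.
    thresh = filters.threshold_otsu(patch)
    trabecular = patch < thresh

    return trabecular.mean()  # fraction of ROI area covered by trabeculae

# Hypothetical usage: compare a plug ROI with a neighbouring-bone ROI.
# plug = trabecular_area_fraction("donor_site_section.tif", (100, 600, 200, 700))
# host = trabecular_area_fraction("donor_site_section.tif", (100, 600, 800, 1300))
# print(f"relative increase: {100 * (plug - host) / host:.1f} %")
```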

Relevance:

100.00%

Publisher:

Abstract:

In this work, a method that synchronizes two video sequences is proposed. Unlike previous methods, which require correspondences between features tracked in the two sequences and/or that the cameras are static or jointly moving, the proposed approach does not impose any of these constraints. It works when the cameras move independently, even if different features are tracked in the two sequences. The assumptions underlying the proposed strategy are that the intrinsic parameters of the cameras are known and that two rigid objects, with independent motions in the scene, are visible in both sequences. The relative motion between these objects is used as the cue for synchronization. The extrinsic parameters of the cameras are assumed to be unknown. A new synchronization algorithm for static or jointly moving cameras that see (possibly) different parts of a common rigidly moving object is also proposed. Proof-of-concept experiments that illustrate the performance of these methods are presented, as well as a comparison with a state-of-the-art approach.
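
The synchronization cue can be illustrated with a small sketch: assuming each sequence yields a per-frame, camera-independent signature of the relative motion between the two objects (for instance a relative rotation angle), the time offset that best aligns the two signatures is found by exhaustive search. The signature extraction itself, sub-frame offsets, and differing frame rates, which the actual method addresses, are omitted here.

```python
# Hedged sketch: given per-frame signatures of the relative motion between two
# rigid objects, as computed from each (independently moving) camera, find the
# integer time offset that best aligns the two sequences.
import numpy as np

def best_offset(sig_a, sig_b, max_offset):
    """Integer d minimising the mean squared difference between
    sig_a[i + d] and sig_b[i] over the overlapping frames."""
    sig_a, sig_b = np.asarray(sig_a, float), np.asarray(sig_b, float)
    best, best_cost = 0, np.inf
    for d in range(-max_offset, max_offset + 1):
        lo = max(0, -d)                         # first i with i + d >= 0
        hi = min(len(sig_b), len(sig_a) - d)    # last exclusive index
        if hi - lo < 10:                        # require enough overlap
            continue
        i = np.arange(lo, hi)
        cost = np.mean((sig_a[i + d] - sig_b[i]) ** 2)
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# Hypothetical usage: the same relative-rotation signature shifted by 7 frames.
t = np.linspace(0, 10, 300)
sig = np.sin(t) + 0.3 * np.sin(3 * t)
print(best_offset(sig, sig[7:], max_offset=20))   # prints 7
```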

Relevance:

100.00%

Publisher:

Abstract:

In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization in which, at each step, either the sharp image or the blur function is reconstructed. Recent work by Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, one can observe experimentally that Chan and Wong's algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong's implementation and in how seemingly harmless choices have dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the state of the art.
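
A stripped-down numpy sketch of such an alternating scheme is given below, written to highlight the detail identified in the analysis: the kernel gradient step is applied first, and the projection onto non-negative, sum-to-one kernels only afterwards (the delayed normalization). Step sizes, iteration counts and the smoothed TV gradient are illustrative choices, not the authors' implementation.

```python
# Hedged sketch of alternating TV blind deconvolution with delayed kernel
# normalization. Model: y = conv(x, k, 'full') + noise, x sharp image, k blur.
import numpy as np
from scipy.signal import fftconvolve, correlate

def tv_grad(x, eps=1e-3):
    """Gradient of a smoothed isotropic total variation (periodic boundaries)."""
    gx = np.roll(x, -1, axis=1) - x
    gy = np.roll(x, -1, axis=0) - x
    norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
    px, py = gx / norm, gy / norm
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def blind_deconv(y, kernel_size, n_iter=200, lam=2e-3, step_x=1e-1, step_k=1e-3):
    h = kernel_size
    x = y[h - 1:, h - 1:].copy()            # crude initial sharp-image estimate
    k = np.full((h, h), 1.0 / h ** 2)       # flat initial blur kernel
    for _ in range(n_iter):
        # Image step: gradient descent on the data term plus the TV prior.
        r = fftconvolve(x, k, mode="full") - y
        gx = correlate(r, k, mode="valid") + lam * tv_grad(x)
        x = np.clip(x - step_x * gx, 0.0, None)
        # Kernel step: an unconstrained gradient update first...
        r = fftconvolve(x, k, mode="full") - y
        gk = correlate(r, x, mode="valid")
        k = k - step_k * gk
        # ...then the *delayed* normalization: only after the update is k
        # projected back to a non-negative kernel that sums to one.
        k = np.clip(k, 0.0, None)
        k /= max(k.sum(), 1e-12)
    return x, k
```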