986 results for Visual Feedback
Abstract:
Creation of electronic invoice management software developed on this technology platform, with explicit use of the latest version of the VSTO (Visual Studio Tools for Office) tools.
Abstract:
The project, code-named Visual Management Vinotec, arises from the need to administer and manage the product catalog of a company that distributes spirits (cognacs and whiskies), fortified wines (Sherry), wines, and cavas. It is a web tool aimed at professionals in the distilled-spirits and wine industries.
Abstract:
This document describes the development of a
Abstract:
This report is a summary of the feedback from the public consultation process on the current Lifeline contract and future options.
Abstract:
Recent evidence has emerged that peroxisome proliferator-activated receptor alpha (PPARalpha), which is largely involved in lipid metabolism, can play an important role in connecting circadian biology and metabolism. In the present study, we investigated the mechanisms by which PPARalpha influences the pacemakers acting in the central clock located in the suprachiasmatic nucleus and in the peripheral oscillator of the liver. We demonstrate that PPARalpha plays a specific role in peripheral circadian control because it is required to maintain the circadian rhythm of the master clock gene brain and muscle Arnt-like protein 1 (bmal1) in vivo. This regulation occurs via direct binding of PPARalpha to a potential PPARalpha response element located in the bmal1 promoter. Conversely, BMAL1 is an upstream regulator of PPARalpha gene expression. We further demonstrate that fenofibrate induces circadian rhythm of clock gene expression in cell culture and up-regulates hepatic bmal1 in vivo. Together, these results provide evidence for an additional regulatory feedback loop involving BMAL1 and PPARalpha in peripheral clocks.
Abstract:
INTRODUCTION Genome-wide association studies of rheumatoid arthritis (RA) have identified an association of the disease with a 6q23 region devoid of genes. TNFAIP3, an RA candidate gene, flanks this region, and polymorphisms in both the TNFAIP3 gene and the intergenic region are associated with systemic lupus erythematosus. We hypothesized that there is a similar association with RA, including polymorphisms in TNFAIP3 and the intergenic region. METHODS To test this hypothesis, we selected tag single-nucleotide polymorphisms (SNPs) in both loci. They were analyzed in 1,651 patients with RA and 1,619 control individuals of Spanish ancestry. RESULTS Weak evidence of association was found both in the 6q23 intergenic region and in the TNFAIP3 locus. The rs582757 SNP and a common haplotype in the TNFAIP3 locus exhibited association with RA. In the intergenic region, two SNPs were associated, namely rs609438 and rs13207033. The latter was only associated in patients with anti-citrullinated peptide antibodies. Overall, statistical association was best explained by the interdependent contribution of SNPs from the two loci, TNFAIP3 and the 6q23 intergenic region. CONCLUSIONS Our data are consistent with the hypothesis that several RA genetic factors exist in the 6q23 region, including polymorphisms in the TNFAIP3 gene, as previously described for systemic lupus erythematosus.
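As a rough illustration of the kind of per-SNP case/control test used in studies like the one above, the sketch below computes an allelic chi-square statistic on a 2x2 table of allele counts. The counts are synthetic and purely illustrative; they are not taken from the study.

```python
def allelic_chi_square(case_minor, case_major, ctrl_minor, ctrl_major):
    """Chi-square statistic (1 d.f.) for a 2x2 allele-count table:
    rows = cases/controls, columns = minor/major allele."""
    a, b, c, d = case_minor, case_major, ctrl_minor, ctrl_major
    n = a + b + c + d
    # Closed-form chi-square for a 2x2 contingency table
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical minor-allele counts for one SNP in cases vs. controls
chi2 = allelic_chi_square(760, 2542, 650, 2588)
print(round(chi2, 2))  # exceeds the 3.84 threshold for p < 0.05 at 1 d.f.
```

In practice such studies typically use dedicated tools (and corrections for multiple testing across tag-SNPs), but the underlying single-locus test reduces to a contingency-table statistic of this form.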
Abstract:
We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labeled images of scenes (for example, coast, forest, city, river, etc.), and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature here applied to a bag of visual words representation for each image, and subsequently training a multiway classifier on the topic distribution vector for each image. We compare this approach to that of representing each image by a bag of visual words vector directly and training a multiway classifier on these vectors. To this end, we introduce a novel vocabulary using dense color SIFT descriptors and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learned, and the type of discriminative classifier used (k-nearest neighbor or SVM). We achieve superior classification performance to recent publications that have used a bag of visual words representation, in all cases using the authors' own data sets and testing protocols. We also investigate the gain of adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
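A minimal sketch of the baseline pipeline this abstract compares against: local descriptors are quantized into a visual vocabulary, each image becomes a word histogram, and a nearest-neighbor classifier operates on those histograms. The pLSA step the paper adds (mapping each histogram to a low-dimensional topic distribution) is omitted here. All data is synthetic; the vocabulary size, descriptor dimension, and class structure are illustrative choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(descriptors, k, iters=10):
    """Crude k-means: pick k 'visual words' from the stacked descriptors."""
    words = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest word, then recenter
        dist = np.linalg.norm(descriptors[:, None] - words[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                words[j] = descriptors[labels == j].mean(axis=0)
    return words

def bow_histogram(descriptors, words):
    """Normalized histogram of nearest visual words for one image."""
    dist = np.linalg.norm(descriptors[:, None] - words[None], axis=2)
    h = np.bincount(dist.argmin(axis=1), minlength=len(words)).astype(float)
    return h / h.sum()

# Two synthetic "scene classes" whose descriptors cluster around different means
def fake_image(label):
    mean = np.zeros(8) if label == 0 else np.full(8, 3.0)
    return mean + rng.normal(size=(50, 8))

train = [(fake_image(y), y) for y in [0, 1] * 10]
words = build_vocabulary(np.vstack([im for im, _ in train]), k=16)
train_h = [(bow_histogram(im, words), y) for im, y in train]

def classify(image):
    # 1-nearest-neighbour on word histograms
    h = bow_histogram(image, words)
    return min(train_h, key=lambda t: np.linalg.norm(h - t[0]))[1]

print(classify(fake_image(0)), classify(fake_image(1)))
```

The paper's contribution would slot in between `bow_histogram` and `classify`: fit pLSA on the training histograms and classify the resulting topic-distribution vectors instead of the raw histograms.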
Abstract:
When underwater vehicles navigate close to the ocean floor, computer vision techniques can be applied to obtain motion estimates. A complete system to create visual mosaics of the seabed is described in this paper. Unfortunately, the accuracy of the constructed mosaic is difficult to evaluate. The use of a laboratory setup to obtain an accurate error measurement is proposed. The system consists of a robot arm carrying a downward-looking camera. A pattern formed by a white background and a matrix of black dots uniformly distributed over the surveyed scene is used to find the exact image registration parameters. When the robot executes a trajectory (simulating the motion of a submersible), an image sequence is acquired by the camera. The motion estimate computed from the encoders of the robot is refined by detecting, to subpixel accuracy, the black dots of the image sequence and computing the 2D projective transform which relates two consecutive images. The pattern is then replaced by a poster of the sea floor and the trajectory is executed again, acquiring the image sequence used to test the accuracy of the mosaicking system.
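The registration step described above (estimating the 2D projective transform, or homography, relating two consecutive images from matched dot centers) can be sketched with the standard direct linear transform (DLT). This is a generic illustration, not the paper's implementation; the point correspondences below are synthetic, whereas the real system detects the black dots to subpixel accuracy.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: solve for the 3x3 homography H (up to scale) from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence (x,y) -> (u,v) contributes two linear equations
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)      # null-space vector = flattened H
    return h / h[2, 2]            # fix the scale for easy comparison

def apply_homography(h, pts):
    p = np.c_[pts, np.ones(len(pts))] @ h.T
    return p[:, :2] / p[:, 2:3]   # back to inhomogeneous coordinates

# Synthetic ground-truth motion between two frames (small rotation + shift)
true_h = np.array([[0.98, -0.02, 5.0],
                   [0.02,  0.98, -3.0],
                   [1e-4,  0.0,   1.0]])
grid = np.array([[x, y] for x in (0, 40, 80, 120) for y in (0, 40, 80)], float)
est = estimate_homography(grid, apply_homography(true_h, grid))
print(np.allclose(est, true_h, atol=1e-6))
```

With noise-free correspondences the SVD recovers the transform exactly; with real subpixel detections one would solve the same system in a least-squares sense over all detected dots.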
Abstract:
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is thus object-dependent, as it relies on the object's appearance. Therefore, performing the positioning task is not possible in the presence of non-textured objects or objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent to current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object's appearance.
Abstract:
This paper focuses on the problem of realizing a plane-to-plane virtual link between a camera attached to the end-effector of a robot and a planar object. To make the system independent of the object's surface appearance, a structured light emitter is attached to the camera so that four laser pointers are projected onto the object. In a previous paper we showed that such a system performs well and has appealing characteristics such as partial decoupling near the desired state and robustness against misalignment of the emitter and the camera (J. Pages et al., 2004). However, no analytical results concerning the global asymptotic stability of the system were obtained, owing to the high complexity of the visual features used. In this work we present a better set of visual features which improves on the properties of the features in (J. Pages et al., 2004) and for which global asymptotic stability can be proved.
Abstract:
In this paper we address the problem of positioning a camera attached to the end-effector of a robotic manipulator so that it becomes parallel to a planar object. This problem has long been studied in visual servoing. Our approach is based on attaching several laser pointers to the camera in a configuration designed to produce a suitable set of visual features. Structured light is used not only to ease the image processing and to allow low-textured objects to be handled, but also to produce a control scheme with desirable properties such as decoupling, stability, good conditioning, and a good camera trajectory.
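For context, control schemes like those in the two abstracts above build on the classical image-based visual servoing law, where the camera velocity is computed as v = -lambda * pinv(L) @ (s - s*), with s the current visual features, s* the desired ones, and L the interaction matrix. The sketch below uses the textbook 2x6 interaction matrix of a normalized image point (x, y) at depth Z; the feature values are synthetic and not from either paper.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Textbook interaction matrix of one normalized image point at depth Z:
    relates the point's image velocity to the 6-dof camera velocity."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error   # 6-dof camera velocity screw

# Four point features forming a square; the desired features are the same
# square shifted in x, so the commanded motion should be a pure x translation
s      = [(0.10, 0.1), (-0.10, 0.1), (-0.10, -0.1), (0.10, -0.1)]
s_star = [(0.12, 0.1), (-0.08, 0.1), (-0.08, -0.1), (0.12, -0.1)]
v = ibvs_velocity(s, s_star, depths=[1.0] * 4)
print(v.round(3))
```

Because the error is a uniform x shift at equal depth, the exact solution is a pure translational velocity along x; the pseudo-inverse recovers it. The laser-pointer features proposed in the papers are designed precisely so that the resulting L has this kind of well-conditioned, decoupled structure.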
Abstract:
Purpose: To functionally and morphologically characterize the retina and optic nerve after transplantation of brain-derived neurotrophic factor (BDNF)- and glial-derived neurotrophic factor (GDNF)-secreting mesenchymal stem cells (MSCs) into glaucomatous rat eyes. Methods: Chronic ocular hypertension (COH) was induced in Brown Norway rats. Lentiviral constructs were used to transduce rat MSCs to produce BDNF, GDNF, or green fluorescent protein (GFP). The fellow eyes served as internal controls. Two days following COH induction, eyes received intravitreal injections of transduced MSCs. Electroretinography was performed to assess retinal function. Tonometry was performed throughout the experiment to monitor IOP. Forty-two days after MSC transplantation, rats were euthanized and the eyes and optic nerves were prepared for analysis. Results: Increased expression and secretion of BDNF and GDNF from lentiviral-transduced MSCs were verified using an ELISA and a bioactivity assay. Ratiometric analysis (COH eye/internal control eye response) of the maximum combined-response A-wave showed that animals with BDNF-MSCs (23.35 ± 5.15%, p=0.021) and GDNF-MSCs (28.73 ± 3.61%, p=0.025) preserved significantly more visual function than GFP-MSC-treated eyes (18.05 ± 5.51%). Animals receiving BDNF-MSCs also had significantly better B-wave (33.80 ± 7.19%) and flicker ERG responses (28.52 ± 10.43%) than GFP-MSC-treated animals (14.06 ± 12.67% and 3.52 ± 0.07%, respectively). Animals receiving GDNF-MSC transplants tended to have better function than animals with GFP-MSC transplants, but the differences were not statistically significant (p=0.057 and p=0.0639). Conclusions: Mesenchymal stem cells are an excellent source of cells for autologous transplantation for the treatment of neurodegenerative diseases. We have demonstrated that lentiviral-transduced MSCs can survive following transplantation and preserve visual function in glaucomatous eyes.
These results suggest that MSCs may be an ideal cellular vehicle for delivery of specific neurotrophic factors to the retina.
Abstract:
PURPOSE: The aim of this work is to investigate the characteristics of eyes failing to maintain visual acuity (VA) while receiving variable-dosing ranibizumab for neovascular age-related macular degeneration (nAMD) after three initial loading doses. METHODS: We studied a consecutive series of patients with nAMD who, after three loading doses of intravitreal ranibizumab (0.5 mg each), were re-treated for fluid seen on optical coherence tomography. After exclusion of eyes with previous treatment, follow-up of less than 12 months, or missed visits, 99 patients were included in the analysis. The influence of baseline characteristics, initial VA response, and central retinal thickness (CRT) fluctuations on VA stability from month 3 to month 24 was analyzed using subgroup and multiple regression analyses. RESULTS: Mean follow-up duration was 21.3 months (range 12-40 months; 32 patients followed up for ≥24 months). Secondary loss of VA (loss of five letters or more) after month 3 was seen in 30 patients (mean VA improvement from baseline +5.8 letters at month 3, mean loss from baseline -5.3 letters at month 12 and -9.7 at the final visit up to month 24), while 69 patients maintained vision (mean gain +8.9 letters at month 3, +10.4 letters at month 12, and +12.8 letters at the final visit up to month 24). Secondary loss of VA was associated with the presence of pigment epithelial detachment (PED) at baseline (p 0.01), but not with baseline fibrosis/atrophy/hemorrhage, CRT fluctuations, or initial VA response. Chart analysis revealed additional individual explanations for the secondary loss of VA, including retinal pigment epithelial tears, progressive fibrosis, and atrophy. CONCLUSIONS: Tissue damage due to degeneration of PED, retinal pigment epithelial tears, progressive fibrosis, progressive atrophy, or massive hemorrhage appears to be relevant in causing secondary loss of VA despite vascular endothelial growth factor suppression. PED at baseline may represent a risk factor.
Abstract:
An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.