859 results for INTERACTIVE SEGMENTATION
Abstract:
The paper presents a study of babies' interactive processes with peers, which aimed to capture some of their qualitative aspects, considering babies' peculiarities. Empirical work was conducted with video-recorded scenes and interviews within the "Babies' Adaptation to a Daycare Center" project, which followed 21 babies (4-13 months) at a daycare center. Data analysis was based on the Network of Meanings perspective. Five episodes are presented here, regarding three focal subjects and their peers. Analysis indicates the occurrence of interactions, among which the role of gaze, the presence of triadic relations (even among babies younger than nine months old), the abbreviation of communicative resources, and empathy stand out. Moreover, despite the absence of verbal language at this age, meaning-making processes were verified. Some practical-theoretical implications are pointed out as well.
Abstract:
Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, which allows for explicit control of the subjective quality criteria on the output quad mesh, at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model, capturing the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grained tuning is easily achieved with the notion of connectivity texturing, which allows for the specification of additional extraordinary vertices and explicit feature alignment, to capture the high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics, while meeting the user's own subjective requirements.
DNA-Interactive Properties of Crotamine, a Cell-Penetrating Polypeptide and a Potential Drug Carrier
Abstract:
Crotamine, a 42-residue polypeptide derived from the venom of the South American rattlesnake Crotalus durissus terrificus, has been shown to be a cell-penetrating protein that targets chromosomes, carries plasmid DNA into cells, and shows specificity for actively proliferating cells. Given this potential role as a nucleic acid-delivery vector, we have studied in detail the binding of crotamine to single- and double-stranded DNAs of different lengths and base compositions over a range of ionic conditions. Agarose gel electrophoresis and ultraviolet spectrophotometry analysis indicate that complexes of crotamine with long-chain DNAs readily aggregate and precipitate at low ionic strength. This aggregation, which may be important for cellular uptake of DNA, becomes less likely with shorter chain lengths. 25-mer oligonucleotides do not show any evidence of such aggregation, permitting the determination of affinities and binding-site sizes via fluorescence quenching experiments. The polypeptide binds non-cooperatively to DNA, covering about 5 nucleotide residues when it binds to single-stranded (ss) or double-stranded (ds) molecules. The affinities of the protein for ss- vs. ds-DNA are comparable, and inversely proportional to salt levels. Analysis of the dependence of affinity on [NaCl] indicates that there are a maximum of ~3 ionic interactions between the protein and DNA, with some of the binding affinity attributable to non-ionic interactions. Inspection of the three-dimensional structure of the protein suggests that residues 31 to 35, Arg-Trp-Arg-Trp-Lys, could serve as a potential DNA-binding site. A hexapeptide containing this sequence displayed a lower DNA binding affinity and a weaker salt dependence than the full-length protein, likely reflecting the more favorable 3D structure and the accessory binding sites present in native crotamine. Taken together, the data presented here describing crotamine-DNA interactions may support the design of more effective nucleic acid drug delivery vehicles that take advantage of crotamine as a carrier with specificity for actively proliferating cells. Citation: Chen P-C, Hayashi MAF, Oliveira EB, Karpel RL (2012) DNA-Interactive Properties of Crotamine, a Cell-Penetrating Polypeptide and a Potential Drug Carrier. PLoS ONE 7(11): e48913. doi:10.1371/journal.pone.0048913
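The salt-dependence analysis mentioned above is, in studies of this kind, typically carried out with the standard Record-Lohman polyelectrolyte formalism. The following is a hedged sketch of that textbook relation, not a derivation taken from the paper itself; Z and ψ are the conventional symbols, and the numbers in the comments are illustrative:

```latex
% Standard counterion-release analysis of protein--DNA binding
% (Record--Lohman formalism; illustrative, not quoted from the paper).
% K_obs: observed association constant; Z: number of ionic contacts;
% psi: fraction of a counterion thermodynamically bound per phosphate
% (about 0.88 for ds-DNA and 0.71 for ss-DNA).
\[
  \frac{\partial \log K_{\mathrm{obs}}}{\partial \log [\mathrm{Na}^{+}]}
  \;=\; -\,Z\psi
\]
% A linear fit of log K_obs against log [NaCl] gives the slope -Z*psi;
% for example, a slope of about -2.6 on ds-DNA (psi ~ 0.88) corresponds
% to Z ~ 3 ionic contacts, the order of magnitude reported above.
```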
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ‖F_P‖_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q-minimization problems converge to a solution of the ‖F_P‖∞-minimization problem (the identity ‖F_P‖∞ = lim_{q→∞} ‖F_P‖_q alone is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. It concentrates on comparing the algorithms' actual running time (as opposed to the provable worst-case scenario), as well as the influence of the choice of seeds on the output.
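Two of the abstract's algorithmic points lend themselves to a compact illustration: the max-norm (bottleneck) objective that GC(max)/IRFC optimizes, and the observation that an ℓ1 solver run on weights w^q solves the ℓq problem. The sketch below is a generic Dijkstra-style formulation of the bottleneck objective, not the paper's linear-time GC(max) implementation (which replaces the heap with bucket queues over the finite range Z); all names are illustrative:

```python
# A minimal sketch of max-norm / IRFC-style segmentation: each vertex is
# claimed by the seed set with the strongest "connectivity", where the
# strength of a path is its weakest affinity and the connectivity is the
# best such path. A Dijkstra-like sweep with a max-heap computes this.
import heapq

def fuzzy_connectedness(affinity, n, seeds):
    """affinity: dict (u, v) -> strength in a finite range Z;
    n: number of vertices; seeds: dict vertex -> object label."""
    neighbors = {u: [] for u in range(n)}
    for (u, v), w in affinity.items():
        neighbors[u].append((v, w))
        neighbors[v].append((u, w))
    conn = [0.0] * n            # best bottleneck strength found so far
    label = [None] * n          # winning seed label per vertex
    heap = []                   # max-heap via negated strengths
    for s, lab in seeds.items():
        conn[s], label[s] = float("inf"), lab
        heapq.heappush(heap, (-conn[s], s))
    while heap:
        c, u = heapq.heappop(heap)
        if -c < conn[u]:
            continue            # stale queue entry
        for v, w in neighbors[u]:
            strength = min(conn[u], w)   # weakest link on extended path
            if strength > conn[v]:
                conn[v], label[v] = strength, label[u]
                heapq.heappush(heap, (-strength, v))
    return label

def weights_for_lq(affinity, q):
    # The abstract's reduction in code: any l1 solver run on the weights
    # w**q also solves the l_q problem for the original weights w.
    return {e: w ** q for e, w in affinity.items()}
```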
Abstract:
Insulin-like growth factor type 1 (IGF1) is a mediator of growth hormone (GH) action, and therefore IGF1 is a candidate gene for recombinant human GH (rhGH) pharmacogenetics. Lower serum IGF1 levels were found in adults homozygous for 19 cytosine-adenosine (CA) repeats in the IGF1 promoter. The aim of this study was to evaluate the influence of the (CA)n IGF1 polymorphism, alone or in combination with the GH receptor (GHR)-exon 3 and -202 A/C insulin-like growth factor binding protein-3 (IGFBP3) polymorphisms, on the growth response to rhGH therapy in GH-deficient (GHD) patients. Eighty-four severe GHD patients were genotyped for the (CA)n IGF1, -202 A/C IGFBP3 and GHR-exon 3 polymorphisms. Multiple linear regressions were performed to estimate the effect of each genotype, after adjustment for other influential factors. We assessed the influence of genotypes on the first-year growth velocity (1st y GV) (n = 84) and on adult height standard deviation score (SDS) adjusted for target-height SDS (AH-TH SDS) after rhGH therapy (n = 37). Homozygosity for the IGF1 (CA)19 repeat allele was negatively correlated with 1st y GV (P = 0.03) and AH-TH SDS (P = 0.002) in multiple linear regression analysis. In conjunction with clinical factors, the IGF1 and IGFBP3 genotypes explain 29% of the 1st y GV variability, whereas the IGF1 and GHR polymorphisms explain 59% of the AH-TH SDS variability. We conclude that homozygosity for the IGF1 (CA)19 allele is associated with less favorable short- and long-term growth outcomes after rhGH treatment in patients with severe GHD. Furthermore, this polymorphism exhibits a non-additive interaction with the -202 A/C IGFBP3 genotype on the 1st y GV and with the GHR-exon 3 genotype on adult height. The Pharmacogenomics Journal (2012) 12, 439-445; doi:10.1038/tpj.2011.13; published online 5 April 2011
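The genotype effects above are estimated with multiple linear regression after adjustment for clinical covariates; a hedged sketch of how such a model is typically specified follows (the file and column names are hypothetical placeholders, not taken from the study):

```python
# A sketch of the kind of adjusted multiple linear regression described
# above: first-year growth velocity modeled on genotype indicators plus
# clinical covariates. Not the study's actual analysis script.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ghd_cohort.csv")  # hypothetical cohort table

model = smf.ols(
    "gv_first_year ~ igf1_ca19_homozygous + igfbp3_202_genotype"
    " + age_at_start + gh_dose + height_sds_at_start",
    data=df,
).fit()
print(model.summary())   # adjusted genotype effect estimates
print(model.rsquared)    # share of GV variability explained
```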
Abstract:
Context: There is great interindividual variability in the response to recombinant human (rh) GH therapy in patients with Turner syndrome (TS). Ascertaining genetic factors can improve the accuracy of growth response predictions. Objective: The objective of the study was to assess the individual and combined influence of the GHR-exon 3 and -202 A/C IGFBP3 polymorphisms on the short- and long-term outcomes of rhGH therapy in patients with TS. Design and Patients: GHR-exon 3 and -202 A/C IGFBP3 genotyping (rs2854744) was correlated with height data of 112 patients with TS who remained prepubertal during the first year of rhGH therapy and 65 patients who reached adult height after 5 ± 2.5 yr of rhGH treatment. Main Outcome Measures: First-year growth velocity and adult height were measured. Results: Patients carrying at least one GHR-d3 or -202 A-IGFBP3 allele presented higher mean first-year growth velocity and achieved taller adult heights than those homozygous for the GHR-fl or -202 C-IGFBP3 alleles, respectively. The combined analysis of GHR-exon 3 and -202 A/C IGFBP3 genotypes showed a clear nonadditive epistatic influence on the adult height of patients with TS treated with rhGH (GHR-exon 3 alone, R² = 0.27; -202 A/C IGFBP3 alone, R² = 0.24; the combined genotypes, R² = 0.37 in multiple linear regression). Together with clinical factors, these genotypes accounted for 61% of the variability in adult height of patients with TS after rhGH therapy. Conclusion: Homozygosity for the GHR-exon 3 full-length allele and/or the -202C-IGFBP3 allele is associated with less favorable short- and long-term growth outcomes after rhGH treatment in patients with TS. (J Clin Endocrinol Metab 97: E671-E677, 2012)
Abstract:
Bilayer segmentation of live video in uncontrolled environments is an essential task for home applications in which the original background of the scene must be replaced, as in video chats or traditional videoconferencing. The main challenge in such conditions is to overcome the difficulties posed by problem situations (e.g., illumination changes, distracting events such as elements moving in the background, and camera shake) that may occur while the video is being captured. This paper presents a survey of segmentation methods for background substitution applications, describes the main concepts, and identifies events that may cause errors. Our analysis shows that the most robust methods rely on specific devices (multiple cameras or sensors that generate depth maps) to aid the process. In order to achieve the same results using conventional devices (monocular video cameras), most current research relies on energy minimization frameworks, in which temporal and spatial information is probabilistically combined with color and contrast information.
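For concreteness, a typical monocular bilayer-segmentation energy of the kind the survey describes (probabilistically fusing color, temporal, and contrast cues) looks roughly as follows; this is an illustrative composite in the spirit of the surveyed methods, not a formula quoted from the paper:

```latex
% A representative two-label energy; x_i is the foreground/background
% label of pixel i, x_i^{t-1} its label in the previous frame.
\[
  E(\mathbf{x}) \;=\;
  \sum_{i} \underbrace{U^{\mathrm{color}}_i(x_i)}_{\text{color likelihood}}
  \;+\; \rho \sum_{i}
  \underbrace{U^{\mathrm{temp}}_i\!\left(x_i \mid x_i^{t-1}\right)}_{\text{temporal prior}}
  \;+\; \lambda \sum_{(i,j)\in\mathcal{N}}
  \underbrace{V_{ij}\,[x_i \neq x_j]}_{\text{contrast-modulated smoothness}}
\]
% with V_{ij} = \exp(-\beta \|I_i - I_j\|^2); for two labels the minimum
% is found exactly by a single min-cut/max-flow computation.
```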
Abstract:
Background: Atherosclerosis causes millions of deaths annually and yields billions in expenses around the world. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that displays high-resolution cross-sectional images of coronary arteries. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many different segmentation methods, available in the literature for other modalities, could be successfully applied to IVOCT images, improving accuracy and usefulness. Method: An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu's threshold; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results: The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards made by experts. The following accuracy was obtained: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions: By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation.
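The three-stage pipeline (preprocessing, wavelet feature extraction with Otsu binarization, binary morphological reconstruction) can be sketched with off-the-shelf building blocks as below; the paper's adapted Otsu threshold and exact morphological operators are not reproduced, so the standard pywt/scikit-image calls here only stand in for them:

```python
# A minimal sketch of the pipeline described above, under the stated
# assumptions; plain Otsu stands in for the paper's adapted version.
import pywt
from skimage import filters, morphology

def segment_lumen(image):
    # 1) Preprocessing: light denoising to attenuate speckle.
    smoothed = filters.gaussian(image, sigma=1.0)

    # 2) Feature extraction: keep the wavelet approximation band, which
    #    carries the tissue-scale information, then binarize with Otsu.
    approx, _ = pywt.dwt2(smoothed, "haar")
    binary = approx > filters.threshold_otsu(approx)

    # 3) Morphological reconstruction: drop spurious components and fill
    #    holes so a single connected lumen object remains.
    cleaned = morphology.remove_small_objects(binary, min_size=64)
    cleaned = morphology.remove_small_holes(cleaned, area_threshold=64)
    return morphology.binary_closing(cleaned, morphology.disk(3))
```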
Abstract:
OBJECTIVE: To propose an automatic brain tumor segmentation system. METHODS: The system used texture characteristics as its main source of information for segmentation. RESULTS: The mean correct match was 94% correspondence between the segmented areas and the ground truth. CONCLUSION: Final results showed that the proposed system was able to find and delimit tumor areas without requiring any user interaction.
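The abstract names texture as the system's main information source without detailing the descriptors; one standard way to compute per-patch texture features is through gray-level co-occurrence matrices, sketched below with scikit-image (the specific properties chosen are illustrative assumptions, not the system's documented features):

```python
# A hedged sketch of GLCM texture descriptors, a common choice for
# texture-driven segmentation; not the paper's actual feature set.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch):
    """patch: 2-D uint8 MRI patch; returns a small texture descriptor."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, "contrast").ravel(),
                      graycoprops(glcm, "homogeneity").ravel()])
```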
Abstract:
The parenchymal distribution of the splenic artery was studied in order to obtain an anatomical basis for partial splenectomy. Thirty-two spleens were studied: 26 spleens from healthy horses weighing 320 to 450 kg, aged 3 to 12 years, and 6 spleens from fetuses obtained from a slaughterhouse. The spleens were submitted to arteriography and scintigraphy so that their vascular pattern could be examined and compared to the external aspect of the organ, aiming to establish anatomo-surgical segments. All radiographs were photographed with a digital camera and the digital images were submitted to a measuring system for comparative analysis of the areas of the dorsal and ventral anatomo-surgical segments. Anatomical investigation of the angioarchitecture of the equine spleen showed a paucivascular area, which coincides with a thinner external area, allowing the organ to be divided into two anatomo-surgical segments of approximately 50% of the organ each.
Abstract:
Recently there has been considerable interest in dynamic textures, due to the explosive growth of multimedia databases. In addition, dynamic texture appears in a wide range of videos, which makes it very important in applications concerned with modeling physical phenomena. Thus, dynamic textures have emerged as a new field of investigation that extends static, or spatial, textures to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture segmentation based on automata theory and the k-means algorithm. In this approach, a feature vector is extracted for each pixel by applying deterministic partially self-avoiding walks on three orthogonal planes of the video. Then, these feature vectors are clustered by the well-known k-means algorithm. Although the k-means algorithm has shown interesting results, it only ensures convergence to a local minimum, which affects the final segmentation result. In order to overcome this drawback, we compare six initialization methods for k-means. The experimental results demonstrate the effectiveness of our proposed approach compared to state-of-the-art segmentation methods.
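Since k-means only reaches a local minimum, the paper compares six initialization schemes; the sketch below illustrates that evaluation pattern with two common schemes available in scikit-learn (random and k-means++), keeping the run with the lowest within-cluster error. It is a minimal stand-in, not the paper's protocol:

```python
# Cluster per-pixel feature vectors with k-means under different
# initializations and keep the best run by inertia (within-cluster SSE).
import numpy as np
from sklearn.cluster import KMeans

def best_kmeans(features, k):
    """features: (n_pixels, n_dims) array of walk-based descriptors."""
    candidates = [
        KMeans(n_clusters=k, init="random", n_init=10).fit(features),
        KMeans(n_clusters=k, init="k-means++", n_init=10).fit(features),
    ]
    return min(candidates, key=lambda m: m.inertia_)

labels = best_kmeans(np.random.rand(5000, 9), k=4).labels_  # toy usage
```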
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity for static textures is extended to the spatio-temporal domain. In this paper, we propose a novel approach for dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed on three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering, and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.
Abstract:
This paper presents the design of a multimedia tool that translates into Spanish sign language the warning messages that a public address system may provide. The goal of this work is to provide a tool that improves the social inclusion of people with hearing disabilities. For this purpose, the typical environment and audio messages of an airport were selected to develop this pilot project. Finally, the audio messages were translated into Spanish sign language by synthesizing an avatar using the rotoscopy animation technique, based on video recordings of a human translator. The final results were evaluated by deaf people.
Abstract:
This thesis proposes a new document model, according to which any document can be segmented into independent components and transformed into a pattern-based projection that uses only a very small set of objects and composition rules. The point is that such a normalized document expresses the same fundamental information as the original one, in a simple, clear and unambiguous way. The central part of my work consists of discussing that model, investigating how a digital document can be segmented, and how a segmented version can be used to implement advanced conversion tools. I present seven patterns which are versatile enough to capture the most relevant document structures, and whose minimality and rigour make that implementation possible. The abstract model is then instantiated into an actual markup language, called IML. IML is a general and extensible language, which basically adopts an XHTML syntax, able to capture a posteriori only the content of a digital document. It is compared with other languages and proposals, in order to clarify its role and objectives. Finally, I present some systems built upon these ideas. These applications are evaluated in terms of user benefits, workflow improvements and impact on the overall quality of the output. In particular, they cover heterogeneous content management processes: from web editing to collaboration (IsaWiki and WikiFactory), from e-learning (IsaLearning) to professional printing (IsaPress).
Abstract:
Matita (which means "pencil" in Italian) is a new interactive theorem prover under development at the University of Bologna. When compared with state-of-the-art proof assistants, Matita presents both traditional and innovative aspects. The underlying calculus of the system, namely the Calculus of (Co)Inductive Constructions (CIC for short), is well known and is used as the basis of another mainstream proof assistant, Coq, with which Matita is to some extent compatible. In the same spirit as several other systems, proof authoring is conducted by the user as a goal-directed proof search, using a script to store textual commands for the system. In the tradition of LCF, the proof language of Matita is procedural and relies on tactics and tacticals to proceed toward proof completion. The interaction paradigm offered to the user is based on the script management technique at the basis of the popularity of the Proof General generic interface for interactive theorem provers: while editing a script, the user can move the execution point forward to deliver commands to the system, or backward to retract (or "undo") past commands. Matita has been developed from scratch over the past 8 years by several members of the Helm research group, of which the author of this thesis is one. Matita is now a full-fledged proof assistant with a library of about 1,000 concepts. Several innovative solutions spun off from this development effort. This thesis is about the design and implementation of some of those solutions, in particular those relevant to the topic of user interaction with theorem provers, to which the author of this thesis was a major contributor. Joint work with other members of the research group is pointed out where needed. The main topics discussed in this thesis are briefly summarized below. Disambiguation. Most activities connected with interactive proving require the user to input mathematical formulae. Since mathematical notation is ambiguous, parsing formulae typeset as mathematicians like to write them down on paper is a challenging task; a challenge neglected by several theorem provers, which usually prefer to fix an unambiguous input syntax. Exploiting features of the underlying calculus, Matita offers an efficient disambiguation engine which permits typing formulae in the familiar mathematical notation. Step-by-step tacticals. Tacticals are higher-order constructs used in proof scripts to combine tactics together. With tacticals, scripts can be made shorter, more readable, and more resilient to changes. Unfortunately, they are de facto incompatible with state-of-the-art user interfaces based on script management. Such interfaces indeed do not permit positioning the execution point inside complex tacticals, thus introducing a trade-off between the usefulness of structuring scripts and a tedious big-step execution behavior during script replaying. In Matita we break this trade-off with tinycals: an alternative to a subset of LCF tacticals which can be evaluated in a more fine-grained manner. Extensible yet meaningful notation. Proof assistant users often face the need to create new mathematical notation in order to ease the use of new concepts. The framework used in Matita for dealing with extensible notation both accounts for high-quality bidimensional rendering of formulae (with the expressivity of MathML Presentation) and provides meaningful notation, where presentational fragments are kept synchronized with the semantic representation of terms.
Using our approach, interoperability with other systems can be achieved at the content level, and direct manipulation of formulae, acting on their rendered forms, is possible too. Publish/subscribe hints. Automation plays an important role in interactive proving, as users like to delegate tedious proving sub-tasks to decision procedures or external reasoners. Exploiting the Web-friendliness of Matita, we experimented with a broker and a network of web services (called tutors) which can independently try to complete open sub-goals of a proof currently being authored in Matita. The user receives hints from the tutors on how to complete sub-goals and can interactively or automatically apply them to the current proof. Another innovative aspect of Matita, only marginally touched on by this thesis, is the embedded content-based search engine Whelp, which is exploited to various ends, from automatic theorem proving to avoiding duplicate work for the user. We also discuss the (potential) reusability in other systems of the widgets presented in this thesis, and how we envisage the evolution of user interfaces for interactive theorem provers in the Web 2.0 era.
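To make the tacticals discussion concrete: tacticals are higher-order functions over tactics, and the two classic LCF combinators can be sketched in a few lines. Matita itself is written in OCaml; the Python below, with its "tactic = goal -> list of subgoals" signature, is only an illustrative model, and tinycals differ precisely in exposing the intermediate steps of such combinations to the execution point:

```python
# A language-neutral model of the two classic LCF tacticals; names and
# signatures are illustrative, not Matita's actual API.
def then_(t1, t2):
    """Apply t1, then apply t2 to every subgoal t1 produced."""
    def tactic(goal):
        return [g2 for g1 in t1(goal) for g2 in t2(g1)]
    return tactic

def orelse(t1, t2):
    """Try t1; if it fails, fall back to t2 on the same goal."""
    def tactic(goal):
        try:
            return t1(goal)
        except Exception:
            return t2(goal)
    return tactic
# Script-management UIs treat then_(t1, t2) as one atomic step; tinycals,
# as described above, let the execution point stop between t1 and t2.
```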