993 results for digital delay-line interpolation
Abstract:
BACKGROUND AND STUDY AIMS: The current gold standard in Barrett's esophagus monitoring consists of four-quadrant biopsies every 1-2 cm in accordance with the Seattle protocol. Adding brush cytology processed by digital image cytometry (DICM) may further increase the detection of patients with Barrett's esophagus who are at risk of neoplasia. The aim of the present study was to assess the additional diagnostic value and accuracy of DICM when added to the standard histological analysis in a cross-sectional multicenter study of patients with Barrett's esophagus in Switzerland. METHODS: One hundred sixty-four patients with Barrett's esophagus underwent 239 endoscopies with biopsy and brush cytology. DICM was carried out on 239 cytology specimens. Measures of the test accuracy of DICM (relative risk, sensitivity, specificity, likelihood ratios) were obtained by dichotomizing the histopathology results (high-grade dysplasia or adenocarcinoma vs. all others) and the DICM results (aneuploidy/intermediate pattern vs. diploidy). RESULTS: DICM revealed diploidy in 83% of the 239 endoscopies, an intermediate pattern in 8.8%, and aneuploidy in 8.4%. An intermediate DICM result carried a relative risk (RR) of 12 and aneuploidy an RR of 27 for high-grade dysplasia/adenocarcinoma. Adding DICM to the standard biopsy protocol, a pathological cytometry result (aneuploid or intermediate) was found in 25 of 239 endoscopies (11%; 18 patients) with low-risk histology (no high-grade dysplasia or adenocarcinoma). During follow-up of 14 of these 18 patients, histological deterioration was seen in 3 (21%). CONCLUSION: DICM from brush cytology may add important information to a standard biopsy protocol by identifying a subgroup of BE patients with high-risk cellular abnormalities.
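As a hedged illustration of how such dichotomized test-accuracy measures are derived, the sketch below computes sensitivity, specificity, likelihood ratios and relative risk from a 2x2 table; the counts are placeholders, not the study's data.

```python
# Hedged sketch: test-accuracy measures from a dichotomized 2x2 table.
# Rows: DICM result (aneuploid/intermediate vs. diploid);
# columns: histology (HGD/adenocarcinoma vs. all others).
def test_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)        # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity        # negative likelihood ratio
    rr = (tp / (tp + fp)) / (fn / (fn + tn))        # relative risk of HGD/AC
    return sensitivity, specificity, lr_pos, lr_neg, rr

# Placeholder counts (tp, fp, fn, tn), not the study's data.
print(test_accuracy(10, 25, 4, 200))
```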
Abstract:
In this paper we present the theoretical and methodological foundations for the development of a multi-agent Selective Dissemination of Information (SDI) service model that applies Semantic Web technologies for specialized digital libraries. These technologies make it possible to achieve more efficient information management, improving agent–user communication processes, and facilitating accurate access to relevant resources. Other tools used are fuzzy linguistic modelling techniques (which make it possible to ease the interaction between users and the system) and natural language processing (NLP) techniques for semiautomatic thesaurus generation. Also, RSS feeds are used as “current awareness bulletins” to generate personalized bibliographic alerts.
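As a rough sketch of the "current awareness bulletin" idea only: the snippet below filters RSS entries against a user's interest terms to produce a personalized alert. The feedparser library, the feed URL and the profile terms are assumptions for illustration; the paper does not specify an implementation.

```python
# Hypothetical sketch of a personalized "current awareness bulletin" from RSS.
import feedparser

def personalized_alert(feed_url, interest_terms):
    """Return titles of feed entries mentioning any of the user's interest terms."""
    feed = feedparser.parse(feed_url)
    hits = []
    for entry in feed.entries:
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(term.lower() in text for term in interest_terms):
            hits.append(entry.get("title", ""))
    return hits

# Placeholder URL and profile; a real SDI service would match against a
# semantically enriched user profile rather than plain keywords.
print(personalized_alert("https://example.org/new-acquisitions.rss",
                         ["semantic web", "digital libraries"]))
```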
Abstract:
This paper initially identifies the main transformations of the television system that are caused by digitalization. Its development in several broadcasting platforms is analyzed, as well as the particular obstacles and requirements detected for each of them. Due to its technical characteristics and its historical link to public services, the terrestrial network requires migration strategies different from strictly commercial ones, and public intervention might be needed. The paper focuses on such migration strategies towards DTT and identifies the main issues for public intervention in the areas of the digital scenario: technology, business and market transformation, and the reception field. Moreover, it describes and classifies the challenges that public broadcasters should confront due to digitalization. The paper finally concludes that the leadership of public broadcasters during the migration towards DTT is an interesting tool for public policy. The need to foster the digitalization of the terrestrial platform and to achieve certain social and public goals beyond market interests brings an opportunity for public institutions and public broadcasters to work together. That leading role could also be positive for the public service as it faces its necessary redefinition and reallocation within the digital context.
Abstract:
The vision-for-action literature favours the idea that the motor output of an action - whether manual or oculomotor - leads to similar results regarding object handling. Findings on line bisection performance challenge this idea: healthy individuals bisect lines manually to the left of centre, but to the right of centre when using eye fixation. If these opposite biases for manual and oculomotor action reflect more general compensatory mechanisms that cancel each other out to enhance overall accuracy, one would expect comparable opposite biases for other material. In the present study, we report on three independent experiments in which we tested line bisection (by hand, by eye fixation) not only for solid lines, but also for letter lines; the latter, when bisected manually, are known to result in a rightward bias. Accordingly, we expected a leftward bias for letter lines when bisected via eye fixation. Analysis of bisection biases provided evidence for this idea: manual bisection was more rightward for letter as compared to solid lines, while bisection by eye fixation was more leftward for letter as compared to solid lines. Support for the eye-fixation observation was particularly clear in two of the three studies, in which comparability between eye and hand action was increasingly adjusted (paper-and-pencil versus touch screen for manual action). These findings question the assumption that oculomotor and manual output are always interchangeable, and rather suggest that, at least in some situations, oculomotor and manual output biases are orthogonal to each other, possibly balancing each other out.
Abstract:
Water balance is achieved through the ability of the kidney to control water reabsorption in the connecting tubule and the collecting duct. In a mouse cortical collecting duct cell line (mCCD(c11)), physiological concentrations of arginine vasopressin increased both electrogenic, amiloride-sensitive, epithelial sodium channel (ENaC)-mediated sodium transport measured by the short-circuit current (Isc) method and water flow (Jv apical to basal) measured by gravimetry with similar activation coefficient K(1/2) (6 and 12 pM, respectively). Jv increased linearly according to the osmotic gradient across the monolayer. A small but highly significant Jv was also measured under isoosmotic conditions. To test the coupling between sodium reabsorption and water flow, mCCD(c11) cells were treated for 24 h under isoosmotic condition with either diluent, amiloride, vasopressin or vasopressin and amiloride. Isc, Jv, and net chemical sodium fluxes were measured across the same monolayers. Around 30% of baseline and 50% of vasopressin-induced water flow is coupled to an amiloride-sensitive, ENaC-mediated, electrogenic sodium transport, whereas the remaining flow is coupled to an amiloride-insensitive, nonelectrogenic sodium transport mediated by an unknown electroneutral transporter. The mCCD(c11) cell line is a first example of a mammalian tight epithelium allowing quantitative study of the coupling between sodium and water transport. Our data are consistent with the 'near isoosmotic' fluid transport model.
Abstract:
Background: The understanding of whole genome sequences in higher eukaryotes depends to a large degree on the reliable definition of transcription units, including exon/intron structures, translated open reading frames (ORFs) and flanking untranslated regions. The best currently available chicken transcript catalog is the Ensembl build, based on the mappings of a relatively small number of full-length cDNAs and ESTs to the genome as well as genome-sequence-derived in silico gene predictions. Results: We use Long Serial Analysis of Gene Expression (LongSAGE) in bursal lymphocytes and the DT40 cell line to verify the quality and completeness of the annotated transcripts. 53.6% of the more than 38,000 unique SAGE tags (unitags) match to full-length bursal cDNAs, the Ensembl transcript build or the genome sequence. The majority of all matching unitags show single matches to the genome, but no matches to the genome-derived Ensembl transcript build. Nevertheless, most of these tags map close to the 3' boundaries of annotated Ensembl transcripts. Conclusions: These results suggest that rather few genes are missing in the current Ensembl chicken transcript build, but that the 3' ends of many transcripts may not have been accurately predicted. The tags with no match in the transcript sequences can now be used to improve gene predictions, pinpoint the genomic location of entirely missed transcripts and optimize the accuracy of gene finder software.
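For readers unfamiliar with the tag-to-transcript matching step, here is a minimal sketch under simplifying assumptions: a LongSAGE unitag is taken as the 17 bp downstream of the 3'-most NlaIII anchor site (CATG) and is looked up in a set of transcript sequences. The sequences and tag below are placeholders, not data from the study.

```python
# Placeholder sketch of tag extraction and matching; not the study's pipeline.
def extract_longsage_tag(transcript, anchor="CATG", tag_len=17):
    """Return the 17 bp adjacent to the 3'-most anchor site, or None."""
    pos = transcript.rfind(anchor)
    if pos == -1:
        return None
    tag = transcript[pos + len(anchor):pos + len(anchor) + tag_len]
    return tag if len(tag) == tag_len else None

# Toy transcript catalog indexed by its predicted tags.
transcripts = {"geneA": "GGCATGACGTACGTACGTACGTACCATGTTTACGTACGTACGTACAAA"}
tag_index = {extract_longsage_tag(seq): name for name, seq in transcripts.items()}

unitag = "TTTACGTACGTACGTAC"   # an observed unitag (placeholder)
print(tag_index.get(unitag, "no match to transcript build"))
```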
Abstract:
Demosaicking is a particular case of interpolation problems where, from a scalar image in which each pixel has either the red, the green or the blue component, we want to interpolate the full-color image. State-of-the-art demosaicking algorithms perform interpolation along edges, but these edges are estimated locally. We propose a level-set-based geometric method to estimate image edges, inspired by the image inpainting literature. This method has a time complexity of O(S), where S is the number of pixels in the image, and compares favorably with the state-of-the-art algorithms both visually and in most relevant image quality measures.
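To make the interpolation problem concrete, the following sketch performs plain bilinear demosaicking of an RGGB Bayer mosaic via normalized convolution. This is a baseline for illustration only, not the level-set method proposed in the paper.

```python
# Baseline bilinear demosaicking of an RGGB Bayer mosaic (illustrative only).
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaick(mosaic):
    """mosaic: 2-D float array sampled on an RGGB Bayer pattern -> H x W x 3 image."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True          # red samples
    masks[0::2, 1::2, 1] = True          # green samples on red rows
    masks[1::2, 0::2, 1] = True          # green samples on blue rows
    masks[1::2, 1::2, 2] = True          # blue samples
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])
    out = np.empty((h, w, 3))
    for c in range(3):
        sparse = np.where(masks[:, :, c], mosaic, 0.0)
        weights = masks[:, :, c].astype(float)
        # Normalized convolution: average the available neighbours of each pixel.
        num = convolve2d(sparse, kernel, mode="same")
        den = convolve2d(weights, kernel, mode="same")
        # Keep the measured samples, fill in only the missing ones.
        out[:, :, c] = np.where(masks[:, :, c], mosaic, num / np.maximum(den, 1e-12))
    return out

# Example on a tiny synthetic mosaic.
rng = np.random.default_rng(0)
print(bilinear_demosaick(rng.random((4, 6))).shape)   # -> (4, 6, 3)
```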
Abstract:
In the past, sensor networks in cities have been limited to fixed sensors, embedded in particular locations, under centralised control. Today, new applications can leverage wireless devices and use them as sensors to create aggregated information. In this paper, we show that the emerging patterns unveiled through the analysis of large sets of aggregated digital footprints can provide novel insights into how people experience the city and into some of the drivers behind these emerging patterns. We particularly explore the capacity to quantify the evolution of the attractiveness of urban space with a case study in the area of the New York City Waterfalls, a public art project of four man-made waterfalls rising from the New York Harbor. Methods to study the impact of an event of this nature are traditionally based on the collection of static information such as surveys and ticket-based people counts, which make it possible to generate estimates of visitors’ presence in specific areas over time. In contrast, our contribution makes use of the dynamic data that visitors generate, such as the density and distribution of aggregate phone calls and photos taken in different areas of interest over time. Our analysis provides novel ways to quantify the impact of a public event on the distribution of visitors and on the evolution of the attractiveness of nearby points of interest. This information has potential uses for local authorities and researchers, as well as service providers such as mobile network operators.
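A minimal sketch of the kind of aggregation such an analysis relies on: counting anonymised call/photo records per area of interest and hour, and deriving each area's share of activity as a simple attractiveness proxy. The records, area names and column names below are hypothetical.

```python
# Hypothetical aggregated records: one row per anonymised call/photo event.
import pandas as pd

records = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2008-07-12 14:05", "2008-07-12 14:40", "2008-07-12 15:10",
        "2008-07-12 15:20", "2008-07-12 15:45",
    ]),
    "area_of_interest": ["Brooklyn Bridge", "Pier 35", "Brooklyn Bridge",
                         "Governors Island", "Brooklyn Bridge"],
    "kind": ["call", "photo", "call", "call", "photo"],
})

# Count events per hour, area and kind.
activity = (records
            .groupby([pd.Grouper(key="timestamp", freq="H"),
                      "area_of_interest", "kind"])
            .size()
            .rename("count")
            .reset_index())

# Share of activity per area within each hour: a simple attractiveness proxy.
activity["share"] = activity["count"] / activity.groupby("timestamp")["count"].transform("sum")
print(activity)
```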
Abstract:
We investigate the problem of finding minimum-distortion policies for streaming delay-sensitive but distortion-tolerant data. We consider cross-layer approaches which exploit the coupling between presentation and transport layers. We make the natural assumption that the distortion function is convex and decreasing. We focus on a single source-destination pair and analytically find the optimum transmission policy when the transmission is done over an error-free channel. This optimum policy turns out to be independent of the exact form of the convex and decreasing distortion function. Then, for a packet-erasure channel, we analytically find the optimum open-loop transmission policy, which is also independent of the form of the convex distortion function. We then find computationally efficient closed-loop heuristic policies and show, through numerical evaluation, that they outperform the open-loop policy and have near optimal performance.
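As a toy illustration of the ingredients only (not the paper's analysis): a convex, decreasing distortion in the number of delivered packets, a packet-erasure channel, and an open-loop schedule fixed in advance, evaluated by Monte Carlo under the simplifying assumption that only the total number of delivered packets matters.

```python
# Toy Monte Carlo evaluation of an open-loop schedule over an erasure channel.
import random

def distortion(delivered):
    """Convex and decreasing in the number of successfully delivered packets."""
    return 1.0 / (1.0 + delivered)

def expected_distortion(schedule, erasure_prob, trials=10_000, seed=0):
    """schedule[t] = number of transmissions attempted in slot t (open loop)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        delivered = sum(1 for attempts in schedule
                          for _ in range(attempts)
                          if rng.random() > erasure_prob)
        total += distortion(delivered)
    return total / trials

for schedule in ([1, 1, 1], [2, 2, 2]):   # placeholder schedules
    print(schedule, expected_distortion(schedule, erasure_prob=0.3))
```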
Abstract:
In this work we propose a new automatic methodology for computing accurate digital elevation models (DEMs) in urban environments from low-baseline stereo pairs that shall be available in the future from a new kind of earth observation satellite. This setting makes both views of the scene look similar, thus avoiding occlusions and illumination changes, which are the main disadvantages of the commonly accepted large-baseline configuration. There still remain two crucial technological challenges: (i) precisely estimating DEMs with strong discontinuities and (ii) providing a statistically proven result, automatically. The first is solved here by a piecewise affine representation that is well adapted to man-made landscapes, whereas the application of computational Gestalt theory introduces reliability and automation. In fact this theory allows us to reduce the number of parameters to be adjusted, and to control the number of false detections. This leads to the selection of a suitable segmentation into affine regions (whenever possible) by a novel and completely automatic perceptual grouping method. It also allows us to discriminate, e.g., vegetation-dominated regions, where such an affine model does not apply and a more classical correlation technique should be preferred. In addition we propose an extension of the classical "quantized" Gestalt theory to continuous measurements, thus combining its reliability with the precision of the variational robust estimation and fine interpolation methods that are necessary in the low-baseline case. Such an extension is very general and will be useful for many other applications as well.
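To illustrate the piecewise affine representation under simplifying assumptions: within one segmented region the elevation is modelled as z = a*x + b*y + c and fitted by least squares. The synthetic points are placeholders; the paper's segmentation and a-contrario validation are not reproduced here.

```python
# Least-squares fit of an affine elevation model over one region (sketch).
import numpy as np

def fit_affine_patch(x, y, z):
    """Fit z = a*x + b*y + c over one region's samples and return (a, b, c)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

# Synthetic roof-like patch: z = 0.1*x - 0.05*y + 12 plus noise (placeholder data).
rng = np.random.default_rng(0)
x = rng.uniform(0, 50, 200)
y = rng.uniform(0, 50, 200)
z = 0.1 * x - 0.05 * y + 12 + rng.normal(0, 0.2, 200)
print(fit_affine_patch(x, y, z))
```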
Abstract:
In this paper a method for extracting semantic information from online music discussion forums is proposed. The semantic relations are inferred from the co-occurrence of musical concepts in forum posts, using network analysis. The method starts by defining a dictionary of common music terms in an art music tradition. Then, it creates a complex network representation of the online forum by matching such a dictionary against the forum posts. Once the complex network is built we can study different network measures, including node relevance, node co-occurrence and term relations via semantically connecting words. Moreover, we can detect communities of concepts inside the forum posts. The rationale is that some music terms are more related to each other than to other terms. All in all, this methodology allows us to obtain meaningful and relevant information from forum discussions.
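A minimal sketch of the co-occurrence step, assuming the networkx library and placeholder posts and dictionary terms: terms from the dictionary that appear in the same post are connected, and edge weights count co-occurrences.

```python
# Co-occurrence network from forum posts and a term dictionary (placeholders).
from itertools import combinations
import networkx as nx

dictionary = {"raga", "tala", "alap", "gharana"}
posts = [
    "The alap of this raga unfolds very slowly.",
    "Which tala is used here? The gharana tradition matters.",
]

G = nx.Graph()
for post in posts:
    tokens = {w.strip(".,?!").lower() for w in post.split()}
    found = sorted(dictionary & tokens)
    for u, v in combinations(found, 2):
        w = G.get_edge_data(u, v, default={"weight": 0})["weight"]
        G.add_edge(u, v, weight=w + 1)

# Node relevance via weighted degree (the paper also considers other measures).
print(sorted(G.degree(weight="weight"), key=lambda kv: -kv[1]))
```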
Abstract:
Standards for the construction of full-depth patches in portland cement concrete pavement usually require replacement of all deteriorated base materials with crushed stone, up to the bottom of the existing pavement layer. In an effort to reduce patch construction time and costs, the Iowa Department of Transportation and the Department of Civil, Construction and Environmental Engineering at Iowa State University studied the use of extra concrete depth as an option for base construction. This report compares the impact of additional concrete patching material depth on the rate of strength gain, the potential for early opening to traffic, patching costs, and long-term patch performance. It also compares those characteristics for early-setting and standard concrete mixes. The results have the potential to change the method of portland cement concrete pavement patch construction in Iowa.
Abstract:
“Magic for a Pixeloscope” is a one-hour show conceived to be performed in a theater setting; it merges mixed and augmented reality (MR/AR) and full-body interaction with classical magic to create new tricks. The show was conceived by an interdisciplinary team composed of a magician, two interaction designers, a theater director and a stage designer. The magician uses custom hardware and software to create new illusions, which are a starting point for exploring a new language for magical expression. In this paper we introduce a conceptual framework used to inform the design of the different tricks; we explore the design and production of some tricks included in the show; and we describe the feedback received at the world premiere and some of the conclusions obtained.
Abstract:
OBJECTIVE: This study was undertaken to determine the delay of extubation attributable to ventilator-associated pneumonia (VAP) in comparison to other complications and the complexity of surgery after repair of congenital heart lesions in neonates and children. METHODS: Cohort study in a pediatric intensive care unit of a tertiary referral center. All patients who had cardiac operations during a 22-month period and who survived surgery were eligible (n = 272, median age 1.3 years). The primary outcome was time to successful extubation. The primary variable of interest was VAP. Surgical procedures were classified according to complexity. Cox proportional hazards models were calculated to adjust for confounding. Potential confounders comprised other known risk factors for delayed extubation. RESULTS: Median time to extubation was 3 days. VAP occurred in 26 patients (9.6%). The rate of VAP was not associated with complexity of surgery (P = 0.22) or cardiopulmonary bypass (P = 0.23). The adjusted analysis revealed further factors associated with delayed extubation: other respiratory complications (n = 28; chylothorax, airway stenosis, diaphragm paresis), prolonged inotropic support (n = 48, 17.6%), and the need for secondary surgery (n = 51, 18.8%; e.g., re-operation, secondary closure of the thorax). Older age promoted early extubation. The median delay of extubation attributable to VAP was 3.7 days (hazard ratio HR = 0.29, 95% CI 0.18-0.49), exceeding the effect sizes of secondary surgery (HR = 0.48) and other respiratory complications (HR = 0.50). CONCLUSION: VAP accounts for a major delay of extubation in pediatric cardiac surgery.
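As an illustration of the kind of Cox proportional-hazards fit described above, the sketch below uses the lifelines library on a hypothetical data frame; the column names and values are invented, and the study's full set of confounders is not reproduced.

```python
# Hypothetical data; columns and values are invented for illustration.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_extubation": [2, 3, 5, 8, 3, 10, 4, 6, 2, 7],
    "extubated":          [1, 1, 1, 1, 1, 0, 1, 1, 1, 1],   # 0 = censored
    "vap":                [0, 0, 1, 1, 0, 1, 0, 1, 0, 0],
    "age_years":          [0.1, 1.3, 0.5, 0.2, 6.0, 2.5, 0.8, 0.3, 4.0, 1.0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_extubation", event_col="extubated")
# exp(coef) < 1 for 'vap' would mean a lower hazard of (i.e. delayed) extubation.
print(cph.summary[["coef", "exp(coef)"]])
```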
Abstract:
This research report concerns the post-doctoral activities conducted between September 2010 and March 2011 at the University Pompeu Fabra, Barcelona. It seeks to identify the consequences of the convergence phenomenon for photojournalism. Thus, in a more general approach, the effort is to recover the structural elements of the convergence concept in journalism. It also aims to map the current debates about the repositioning of news-related photographic practices amid the widespread adoption of digital devices in contemporary workflows. The report further covers the analysis of photographic collectives as a result of the convergence framework applied to photojournalism; the debate on ways of funding; alternatives to the alleged crisis of press photography; and, finally, it proposes qualifying stages of the development of photojournalism in the digital age as well as hypotheses concerning the structure of productive routines. In addition, we present three cases analyzed in order to explore and verify the occurrence of characteristics that may identify the object of research in the state of practice. Finally, we work through a series of conclusions, revisiting the main hypotheses. With this strategy, it is possible to define a sequence of analysis capable of addressing the characteristics present in the studied cases and in future ones, and thus to affirm this stage as a step in the continuous historical course of photojournalism.