977 results for MULTIPLE ACCESS INTERFERENCE
Abstract:
Animals can compete for resources by displaying various acoustic signals that may differentially affect the outcome of competition. We propose the hypothesis that the most efficient signal for deterring opponents should be the one that most honestly reveals motivation to compete. We tested this hypothesis in the barn owl (Tyto alba), in which nestlings produce more and longer calls than their siblings to compete for priority access to the indivisible prey item their parents will deliver next. Because nestlings increase call rate to a larger extent than call duration as they become hungrier, call rate should signal hunger level more accurately. This leads us to propose three predictions. First, a high number of calls should be more efficient than long calls in deterring siblings from competing. Second, the rate at which an individual calls should be more sensitive to variation in the intensity of sibling vocal competition than the duration of its calls. Third, call rate should influence competitors' vocalization for a longer period of time than call duration. To test these three predictions we performed playback experiments, broadcasting to singleton nestlings calls of varying duration and at different rates. In line with the first prediction, singleton nestlings became less vocal to a larger extent when we broadcast more calls rather than longer calls. In line with the second prediction, nestlings reduced vocalization rate to a larger extent than call duration when we broadcast more or longer calls. Finally, call rate had a longer-lasting influence on opponents' vocal behavior than call duration. Young animals thus actively and differentially use multiple signaling components to compete with their siblings over parental resources.
Abstract:
This article introduces a new interface for T-Coffee, a consistency-based multiple sequence alignment program. The interface provides easy and intuitive access to the most popular functionality of the package, including the default T-Coffee mode for protein and nucleic acid sequences, the M-Coffee mode that combines the output of other aligners, and the template-based modes of T-Coffee that deliver high-accuracy alignments using structural or homology-derived templates. The three available template modes are Expresso, for aligning proteins with a known 3D structure; R-Coffee, for aligning RNA sequences with conserved secondary structures; and PSI-Coffee, for accurately aligning distantly related sequences using homology extension. The new server benefits from recent improvements to the T-Coffee algorithm, can align up to 150 sequences of up to 10,000 residues, and is available from both http://www.tcoffee.org and its main mirror http://tcoffee.crg.cat.
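As a rough local illustration of the modes described above, the sketch below calls a t_coffee installation from Python using the mode switches these modes are normally invoked with on the command line (mcoffee, expresso, rcoffee, psicoffee); the input file name is a placeholder, and the web server itself exposes this functionality through its own forms rather than through this CLI.

```python
import subprocess

FASTA = "example.fasta"  # placeholder input file of protein or RNA sequences

# Default T-Coffee alignment of protein or nucleic acid sequences.
subprocess.run(["t_coffee", FASTA], check=True)

# M-Coffee: combine the output of several third-party aligners.
subprocess.run(["t_coffee", FASTA, "-mode", "mcoffee"], check=True)

# Expresso: template-based alignment using known 3D structures.
subprocess.run(["t_coffee", FASTA, "-mode", "expresso"], check=True)

# R-Coffee: align RNA sequences using conserved secondary-structure templates.
subprocess.run(["t_coffee", FASTA, "-mode", "rcoffee"], check=True)

# PSI-Coffee: homology-extended alignment of distantly related sequences.
subprocess.run(["t_coffee", FASTA, "-mode", "psicoffee"], check=True)
```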
Abstract:
Background: Information about the composition of regulatory regions is of great value for designing experiments to functionally characterize gene expression. The multiplicity of available applications for predicting transcription factor binding sites in a particular locus contrasts with the substantial computational expertise demanded to manipulate them, which may constitute a barrier for the experimental community. Results: CBS (Conserved regulatory Binding Sites, http://compfly.bio.ub.es/CBS) is a public platform of evolutionarily conserved binding sites and enhancers predicted in multiple Drosophila genomes, furnished with published chromatin signatures associated with transcriptionally active regions and other experimental sources of information. Rapid access to this novel body of knowledge through a user-friendly web interface enables non-expert users to identify the binding sequences available for any particular gene, transcription factor, or genome region. Conclusions: The CBS platform is a powerful resource that provides tools for mining individual sequences and groups of co-expressed genes together with epigenomic information to conduct regulatory screenings in Drosophila.
Abstract:
The role of grammatical class in lexical access and representation is still not well understood. Grammatical effects obtained in picture-word interference experiments have been argued to show the operation of grammatical constraints during lexicalization when syntactic integration is required by the task. Alternative views hold that the ostensibly grammatical effects actually derive from the coincidence of semantic and grammatical differences between lexical candidates. We present three picture-word interference experiments conducted in Spanish. In the first two, the semantic relatedness (related or unrelated) and the grammatical class (nouns or verbs) of the target and the distracter were manipulated in an infinitive-form action naming task in order to disentangle their contributions to verb lexical access. In the third experiment, a possible confound between grammatical class and semantic domain (objects or actions) was eliminated by using action nouns as distracters. A condition in which participants were asked to name the action pictures using an inflected form of the verb was also included to explore whether the need for syntactic integration modulated the appearance of grammatical effects. Whereas action words (nouns or verbs), but not object nouns, produced longer reaction times irrespective of their grammatical class in the infinitive condition, only verbs slowed latencies in the inflected-form condition. Our results suggest that speech production relies on the exclusion of candidate responses that do not fulfil task-pertinent criteria such as membership in the appropriate semantic domain or grammatical class. Taken together, these findings are explained by a response-exclusion account of speech output. This and alternative hypotheses are discussed.
Abstract:
The provision of Internet access to large numbers of users has traditionally been under the control of operators, who have built closed access networks for connecting customers. Because the access network (i.e., the last mile to the customer) is generally the most expensive part of the network, owing to the vast amount of cable required, many operators have been reluctant to build access networks in rural areas. There are also problems in urban areas, where incumbent operators may use various tactics to make it difficult for competitors to enter the market. Open access networking, in which the goal is to connect multiple operators and other types of service providers to a shared network, changes the way networks are used. This change in network structure dismantles vertical integration in service provision and enables true competition, as no service provider can prevent others from competing in the open access network. This thesis describes the development from traditional closed access networks towards open access networking and analyses different types of open access solution. The thesis introduces a new open access network approach (the Lappeenranta Model) in greater detail and compares it to other types of open access networks. The thesis shows that end users and service providers see local open access and services as beneficial. In addition, the thesis discusses open access networking in a multidisciplinary fashion, focusing on the real-world challenges of open access networks.
Abstract:
Objective: To evaluate the perioperative outcomes, safety, and feasibility of video-assisted resection for primary and secondary liver lesions. Methods: From a prospective database, we analyzed the perioperative results (up to 90 days) of 25 consecutive patients undergoing video-assisted resections between June 2007 and June 2013. Results: The mean age was 53.4 years (23-73) and 16 (64%) patients were female. Of the total, 84% had malignant disease. We performed 33 resections (1 to 4 nodules per patient). The procedures performed were non-anatomical resections (n = 26), segmentectomy (n = 1), 2/3 bisegmentectomy (n = 1), 6/7 bisegmentectomy (n = 1), left hepatectomy (n = 2), and right hepatectomy (n = 2). The procedures involved posterosuperior segments in 66.7% of cases, requiring multiple or larger resections. The mean operating time was 226 minutes (80-420) and the mean anesthesia time 360 minutes (200-630). The mean size of the resected nodules was 3.2 cm (0.8 to 10), and the surgical margins were free in all analyzed specimens. Eight percent of patients needed blood transfusion, and no case was converted to open surgery. The mean length of stay was 6.5 days (3-16). Postoperative complications occurred in 20% of patients, with no perioperative mortality. Conclusion: Video-assisted liver resection is feasible and safe and should be part of the liver surgeon's armamentarium for the resection of primary and secondary liver lesions.
Abstract:
Workshop at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The aim of the present study was to evaluate the acidification of the endosome-lysosome system of renal epithelial cells after endocytosis of two human immunoglobulin lambda light chains (Bence-Jones proteins, BJP) obtained from patients with multiple myeloma. Renal epithelial cell handling of two BJP (neutral and acidic BJP) was evaluated by rhodamine fluorescence. Renal cells (MDCK) were maintained in culture and, when confluent, were incubated with rhodamine-labeled BJP for different periods of time. Photos were obtained with a fluorescence microscope (Axiolab-Zeiss). Labeling density was determined on slides with a densitometer (Shimadzu Dual-Wavelength Flying-Spot Scanner CS9000). Endocytosis of neutral and acidic BJP was correlated with acidic intracellular compartment distribution using acridine orange labeling. We compared the pattern of distribution after incubation of native neutral and acidic BJP and after complete deglycosylation of BJP by periodate oxidation. The subsequent alteration of pI converted neutral BJP to acidic BJP. There was a significant accumulation of neutral BJP in endocytic structures, reduced lysosomal acidification, and a diffuse pattern of acidification. This pattern was reversed after total deglycosylation and subsequent alteration of the pI to an acidic BJP. We conclude that the physicochemical characteristics of BJP interfere with intracellular acidification, possibly explaining the strong nephrotoxicity of neutral BJP. Lysosomal acidification is fundamental for adequate protein processing and catabolism.
Abstract:
The discovery of double-stranded RNA-mediated gene silencing has rapidly led to its use as a method of choice for blocking a gene, and has turned it into one of the most discussed topics in cell biology. Although still in its infancy, the field of RNA interference has already produced a vast array of results, mainly in Caenorhabditis elegans but recently also in mammalian systems. Micro-RNAs, short RNA hairpins capable of blocking translation, are transcribed from genomic DNA and are implicated in processes ranging from development to cell signaling. The present review discusses the main methods used for gene silencing in cell culture and animal models, including the selection of target sequences, delivery methods, and strategies for successful silencing. Expected developments are briefly discussed, ranging from reverse genetics to therapeutics. Thus, the development of the new paradigm of RNA-mediated gene silencing has produced two important advances: knowledge of a basic cellular mechanism present in the majority of eukaryotic cells, and access to a potent and specific new method for gene silencing.
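As a minimal illustration of the target-sequence selection step mentioned in this review, the Python sketch below scans a cDNA for AA(N19) motifs of moderate GC content, a much simplified version of the classic empirical siRNA design guidelines; the function name, thresholds, and example sequence are illustrative assumptions, not material from the article.

```python
import re

def candidate_sirna_targets(cdna, min_gc=0.30, max_gc=0.52):
    """Scan a cDNA for AA(N19) motifs with moderate GC content, a much
    simplified stand-in for empirical siRNA target-selection rules."""
    cdna = cdna.upper().replace("U", "T")
    targets = []
    # Zero-width lookahead so that overlapping sites are all reported;
    # the capture group holds the 19-nt core following the AA dinucleotide.
    for m in re.finditer(r"(?=AA([ACGT]{19}))", cdna):
        core = m.group(1)
        gc = (core.count("G") + core.count("C")) / len(core)
        if min_gc <= gc <= max_gc:
            targets.append((m.start(), "AA" + core, round(gc, 2)))
    return targets

# Made-up example sequence; prints [(3, 'AAGCTTACATTGGATTACGTA', 0.37)].
print(candidate_sirna_targets("ATGAAGCTTACATTGGATTACGTAGCCTGA"))
```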
Abstract:
Affiliation: Louise Lafortune: Faculté de médecine, Université de Montréal
Abstract:
Scheduling tasks to efficiently use the available processor resources is crucial to minimizing the runtime of applications on shared-memory parallel processors. One factor that contributes to poor processor utilization is the idle time caused by long-latency operations, such as remote memory references or processor synchronization operations. One way of tolerating this latency is to use a processor with multiple hardware contexts that can rapidly switch to executing another thread of computation whenever a long-latency operation occurs, thus increasing processor utilization by overlapping computation with communication. Although multiple contexts are effective for tolerating latency, this effectiveness can be limited by memory and network bandwidth, by cache interference effects among the multiple contexts, and by critical tasks sharing processor resources with less critical tasks. This thesis presents techniques that increase the effectiveness of multiple contexts by intelligently scheduling threads to make more efficient use of processor pipeline, bandwidth, and cache resources. It proposes thread prioritization as a fundamental mechanism for directing the thread schedule on a multiple-context processor. A priority is assigned to each thread, either statically or dynamically, and is used by the thread scheduler to decide which threads to load in the contexts and which context to switch to on a context switch. We develop a multiple-context model that integrates both cache and network effects and shows how thread prioritization can both maintain high processor utilization and limit increases in critical-path runtime caused by multithreading. The model also shows that, to be effective in bandwidth-limited applications, thread prioritization must be extended to prioritize memory requests. We show how simple hardware can prioritize the running of threads in the multiple contexts and the issuing of requests to both the local memory and the network. Simulation experiments show how thread prioritization is used in a variety of applications. Thread prioritization can improve the performance of synchronization primitives by minimizing the number of processor cycles wasted in spinning and devoting more cycles to critical threads. It can be used in combination with other techniques to improve cache performance and minimize cache interference between different working sets in the cache. For applications that are critical-path limited, thread prioritization can improve performance by allowing processor resources to be devoted preferentially to critical threads. These experimental results show that thread prioritization is a mechanism that can be used to implement a wide range of scheduling policies.
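To make the scheduling idea concrete, here is a minimal Python sketch of priority-directed context scheduling: each thread carries a priority, the most critical ready threads are loaded into the hardware contexts, and a long-latency stall triggers a switch to the most critical non-stalled loaded thread. The class names, context count, and priority values are illustrative assumptions, not the thesis's actual mechanism or simulation infrastructure.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Thread:
    priority: int            # lower value = more critical (heapq is a min-heap)
    name: str = field(compare=False)
    stalled: bool = field(default=False, compare=False)

class MultiContextCore:
    def __init__(self, num_contexts=4):
        self.num_contexts = num_contexts
        self.contexts = []    # threads currently loaded in hardware contexts
        self.ready_pool = []  # unloaded runnable threads, kept as a priority heap

    def submit(self, thread):
        heapq.heappush(self.ready_pool, thread)
        self._load_contexts()

    def _load_contexts(self):
        # Fill empty contexts with the most critical ready threads.
        while len(self.contexts) < self.num_contexts and self.ready_pool:
            self.contexts.append(heapq.heappop(self.ready_pool))

    def on_long_latency_stall(self, thread):
        # The running thread issued a remote reference or synchronization op:
        # mark it stalled and switch to the most critical non-stalled context.
        thread.stalled = True
        runnable = [t for t in self.contexts if not t.stalled]
        return min(runnable, default=None)   # next thread to run, by priority

core = MultiContextCore(num_contexts=2)
a, b, c = Thread(0, "critical"), Thread(5, "worker"), Thread(9, "background")
for t in (a, b, c):
    core.submit(t)
print(core.on_long_latency_stall(a).name)    # -> "worker"
```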
Abstract:
Stable isotopic characterization of chlorine in chlorinated aliphatic pollution is potentially very valuable for risk assessment and for monitoring remediation or natural attenuation. The approach has been underused because of the complexity of the analysis and the time it takes. We have developed a new method that eliminates sample preparation. Gas chromatography produces individually eluted sample peaks for analysis. The He carrier gas is mixed with Ar and introduced directly into the torch of a multicollector ICPMS. The MC-ICPMS is run at a high mass resolution of ≥10,000 to eliminate the interference of ArH with Cl at mass 37. The standardization approach is similar to that used in continuous-flow stable isotope analysis, in which sample and reference materials are measured successively. We have measured PCE relative to a laboratory TCE standard mixed with the sample. Solvent samples of 200 nmol to 1.3 μmol (24-165 μg of Cl) were measured. The PCE gave the same value relative to the TCE as measured by the conventional method, with a precision of 0.12‰ (2 × standard error) but poorer precision for the smaller samples.
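For readers unfamiliar with the bracketing standardization referred to above, the Python sketch below shows how a sample 37Cl/35Cl ratio measured between two reference measurements is converted to per-mil delta notation relative to the laboratory standard; the function and the ion-beam ratios are made up for illustration and are not taken from the study.

```python
def delta37Cl(r_sample, r_ref_before, r_ref_after):
    """delta-37Cl (per mil) of a sample 37Cl/35Cl ratio against the mean
    of the two bracketing reference-standard ratios."""
    r_ref = (r_ref_before + r_ref_after) / 2.0
    return (r_sample / r_ref - 1.0) * 1000.0

# Example with illustrative (made-up) ion-beam ratios: sample bracketed
# by two reference measurements of the laboratory standard.
print(round(delta37Cl(0.32015, 0.31990, 0.31994), 2))   # -> 0.72 per mil
```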