16 results for Multiple Sources
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Whether the mineral dust found in Greenlandic ice cores during the Holocene stems from multiple source areas is contested. Particles entrained above a more productive, primary source dominate the signal's multi-seasonal average. Data at sub-annual resolution, however, reveal at least one further source. Whereas distinct inputs from the primary source are visible as elevated concentration levels, various inputs from the secondary source(s) are reflected in multiple maxima of the coarse particle percentage. As long as the dust sources' respective seasonal cycles are preserved, the primary and secondary sources can be distinguished. Since the ejecta ultimately detected from the two sources differ in size, which can be attributed to a difference in atmospheric residence times, it is suggested that the secondary source is located closer to the drilling site than the primary one.
Abstract:
Java Enterprise Applications (JEAs) are large systems that integrate multiple technologies and programming languages. Transactions in JEAs simplify the development of code that deals with failure recovery and multi-user coordination by guaranteeing atomicity of sets of operations. The heterogeneous nature of JEAs, however, can obfuscate conceptual errors in the application code, and in particular can hide incorrect declarations of transaction scope. In this paper we present a technique to expose and analyze the application transaction scope in JEAs by merging and analyzing information from multiple sources. We also present several novel visualizations that aid in the analysis of transaction scope by highlighting anomalies in the specification of transactions and violations of architectural constraints. We have validated our approach on two versions of a large commercial case study.
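To make the idea of merging transaction-scope information from multiple sources concrete, a minimal sketch follows. The data format, method names, and transaction attributes are hypothetical illustrations, not the representation used in the paper; the sketch only shows the general pattern of merging per-method declarations and flagging disagreements as candidate anomalies.

```python
from collections import defaultdict

# Hypothetical per-method transaction attributes extracted from two different
# sources (e.g., code annotations vs. a deployment descriptor). Names and
# values are illustrative only.
annotations = {"OrderService.place": "REQUIRED", "OrderService.cancel": "REQUIRES_NEW"}
descriptor = {"OrderService.place": "REQUIRED", "OrderService.cancel": "NOT_SUPPORTED"}

def merge_and_flag(*sources):
    """Merge per-method transaction attributes from several sources and return
    the methods whose declarations disagree (candidate scope anomalies)."""
    merged = defaultdict(set)
    for source in sources:
        for method, attribute in source.items():
            merged[method].add(attribute)
    return {method: attrs for method, attrs in merged.items() if len(attrs) > 1}

# Flags OrderService.cancel, whose two sources declare conflicting attributes.
print(merge_and_flag(annotations, descriptor))
```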
Abstract:
We present a Rare Earth Elements (REE) record determined on the EPICA ice core drilled at Dronning Maud Land (EDML) in the Atlantic sector of the East Antarctic Plateau. The record covers the transition from the last glacial stage (LGS) to the early Holocene (26 600–7500 yr BP) at decadal to centennial resolution. Additionally, samples from potential source areas (PSAs) for Antarctic dust were analyzed for their REE characteristics. The dust provenance is discussed by comparing the REE fingerprints in the ice core and the PSA samples. We find a shift in the variability of REE composition at ~15 000 yr BP in the ice core samples. Before 15 000 yr BP, the dust composition was very uniform and its provenance was almost certainly dominated by a South American source. After 15 000 yr BP, multiple sources such as Australia and New Zealand became relatively more important, although South America remained the major dust source. A similar change in dust characteristics, accompanied by a shift in REE composition, was observed in the EPICA Dome C ice core at ~15 000 yr BP, suggesting a change in atmospheric circulation in the Southern Hemisphere.
Abstract:
The timing of divergence events allows us to infer the conditions under which biodiversity has evolved and to gain important insights into the mechanisms driving evolution. Cichlid fishes are a model system for studying speciation and adaptive radiation, yet we have lacked reliable timescales for their evolution. Phylogenetic reconstructions are consistent with cichlid origins prior to Gondwanan landmass fragmentation 121-165 MYA, considerably earlier than the first known fossil cichlids (Eocene). We examined the timing of cichlid evolution using a relaxed molecular clock calibrated with geological estimates for the ages of 1) Gondwanan fragmentation and 2) cichlid fossils. Timescales of cichlid evolution derived from fossil-dated phylogenies of other bony fishes most closely matched those suggested by Gondwanan breakup calibrations, suggesting that the Eocene origins and marine dispersal implied by the cichlid fossil record may be due to its incompleteness. Using Gondwanan calibrations, we found that accumulation of genetic diversity within the radiating lineages of the African Lakes Malawi, Victoria and Barombi Mbo, and Palaeolake Makgadikgadi began around or after the time of lake basin formation. These calibrations also suggest that Lake Tanganyika was colonized independently by the major radiating cichlid tribes, which then began to accumulate genetic diversity thereafter. These results contrast with the widely accepted theory that diversification into major lineages took place within the Tanganyika basin. Together, this evidence suggests that ancient lake habitats have played a key role in generating and maintaining diversity within radiating lineages, and also that lakes may have captured preexisting cichlid diversity from multiple sources, from which adaptive radiations have evolved.
Abstract:
Verbal thoughts (such as negative cognitions) and sensory phenomena (such as visual mental imagery) are usually conceptualised as distinct mental experiences. The present study examined to what extent depressive thoughts are accompanied by sensory experiences and how this is associated with symptom severity, insight into illness and quality of life. A large sample of mildly to moderately depressed patients (N = 356) was recruited from multiple sources and asked about sensory properties of their depressive thoughts in an online study. Diagnostic status and symptom severity were established in a telephone interview with trained raters. Sensory properties of negative thoughts were reported by 56.5% of the sample (i.e., a sensation in at least one sensory modality). The highest prevalence was seen for bodily (39.6%), followed by auditory (30.6%) and visual (27.2%) sensations. Patients reporting sensory properties of thoughts showed more severe psychopathological symptoms than those who did not. The degree of perceptuality was marginally associated with quality of life. The findings support the notion that depressive thoughts are not only verbal but are commonly accompanied by sensory experiences. The perceptuality of depressive thoughts and the resulting sense of authenticity may contribute to the emotional impact and pervasiveness of such thoughts, making them difficult for their holder to dismiss.
Abstract:
Objectives: To update the 2006 systematic review of the comparative benefits and harms of erythropoiesis-stimulating agent (ESA) strategies and non-ESA strategies to manage anemia in patients undergoing chemotherapy and/or radiation for malignancy (excluding myelodysplastic syndrome and acute leukemia), including the impact of alternative thresholds for initiating treatment and optimal duration of therapy.
Data sources: Literature searches were updated in electronic databases (n=3), conference proceedings (n=3), and Food and Drug Administration transcripts. Multiple sources (n=13) were searched for potential gray literature. A primary source for current survival evidence was a recently published individual patient data meta-analysis. In that meta-analysis, patient data were obtained from investigators for studies enrolling more than 50 patients per arm. Because those data constitute the most currently available data for this update, as well as the source for on-study (active treatment) mortality data, we limited inclusion in the current report to studies enrolling more than 50 patients per arm to avoid potential differential endpoint ascertainment in smaller studies.
Review methods: Title and abstract screening was performed by one or two (to resolve uncertainty) reviewers; potentially included publications were reviewed in full text. Two or three (to resolve disagreements) reviewers assessed trial quality. Results were independently verified and pooled for outcomes of interest. The balance of benefits and harms was examined in a decision model.
Results: We evaluated evidence from 5 trials directly comparing darbepoetin with epoetin, 41 trials comparing epoetin with control, and 8 trials comparing darbepoetin with control; 5 trials evaluated early versus late (delay until Hb ≤9 to 11 g/dL) treatment. Trials varied according to duration, tumor types, cancer therapy, trial quality, iron supplementation, baseline hemoglobin, ESA dosing frequency (and therefore amount per dose), and dose escalation. ESAs decreased the risk of transfusion (pooled relative risk [RR], 0.58; 95% confidence interval [CI], 0.53 to 0.64; I2 = 51%; 38 trials) without evidence of meaningful difference between epoetin and darbepoetin. Thromboembolic event rates were higher in ESA-treated patients (pooled RR, 1.51; 95% CI, 1.30 to 1.74; I2 = 0%; 37 trials) without difference between epoetin and darbepoetin. In 14 trials reporting the Functional Assessment of Cancer Therapy (FACT)-Fatigue subscale, the most common patient-reported outcome, scores decreased by −0.6 in control arms (95% CI, −6.4 to 5.2; I2 = 0%) and increased by 2.1 in ESA arms (95% CI, −3.9 to 8.1; I2 = 0%). There were fewer thromboembolic and on-study mortality adverse events when ESA treatment was delayed until baseline Hb was less than 10 g/dL, in keeping with current treatment practice, but the difference in effect from early treatment was not significant, and the evidence was limited and insufficient for conclusions. No evidence informed optimal duration of therapy. Mortality was increased during the on-study period (pooled hazard ratio [HR], 1.17; 95% CI, 1.04 to 1.31; I2 = 0%; 37 trials). There was one additional death for every 59 treated patients when the control-arm on-study mortality was 10 percent and one additional death for every 588 treated patients when the control-arm on-study mortality was 1 percent. A cohort decision model yielded a consistent result: greater loss of life-years when control-arm on-study mortality was higher. There was no discernible increase in mortality with ESA use over the longest available followup (pooled HR, 1.04; 95% CI, 0.99 to 1.10; I2 = 38%; 44 trials), but many trials did not include an overall survival endpoint and potential time-dependent confounding was not considered.
Conclusions: Results of this update were consistent with the 2006 review. ESAs reduced the need for transfusions and increased the risk of thromboembolism. FACT-Fatigue scores were better with ESA use but the magnitude was less than the minimal clinically important difference. An increase in mortality accompanied the use of ESAs. An important unanswered question is whether dosing practices and overall ESA exposure might influence harms.
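As a rough check of the reported harm figures, the numbers needed to harm follow from the pooled on-study hazard ratio and the stated control-arm mortalities, if the hazard ratio is approximated as a relative risk over the short on-study window (an assumption of this sketch, not a statement from the report):

```python
# Back-of-the-envelope reproduction of the "one additional death per N treated
# patients" figures, approximating the pooled on-study HR (1.17) as a relative risk.

def number_needed_to_harm(control_mortality: float, hazard_ratio: float) -> float:
    """Approximate NNH from control-arm on-study mortality and a pooled HR."""
    treated_mortality = control_mortality * hazard_ratio  # HR treated as RR (assumption)
    absolute_risk_increase = treated_mortality - control_mortality
    return 1.0 / absolute_risk_increase

print(round(number_needed_to_harm(0.10, 1.17)))  # ~59 treated patients per extra death
print(round(number_needed_to_harm(0.01, 1.17)))  # ~588 treated patients per extra death
```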
Abstract:
Intra-session network coding has been shown to offer significant gains in terms of achievable throughput and delay in settings where one source multicasts data to several clients. In this paper, we consider a more general scenario where multiple sources transmit data to sets of clients over a wireline overlay network. We propose a novel framework for efficient rate allocation in networks where intermediate network nodes have the opportunity to combine packets from different sources using randomized network coding. We formulate the problem as the minimization of the average decoding delay in the client population and solve it with a gradient-based stochastic algorithm. Our optimized inter-session network coding solution is evaluated in different network topologies and is compared with basic intra-session network coding solutions. Our results show the benefits of proper coding decisions and effective rate allocation for lowering the decoding delay when the network is used by concurrent multicast sessions.
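For readers unfamiliar with randomized network coding, the sketch below shows the packet-combination step an intermediate node might perform. It uses GF(2) (XOR) coefficients purely for brevity; practical systems, and presumably the framework in the paper, would use a larger field such as GF(2^8), and the function name is illustrative rather than taken from the paper.

```python
import random

def random_linear_combine(packets):
    """Combine equal-length packets with random GF(2) coefficients (XOR).

    Returns the coefficient vector (which clients need for decoding) and the
    coded payload. Clients recover the originals by Gaussian elimination once
    they hold enough linearly independent combinations.
    """
    coeffs = [random.randint(0, 1) for _ in packets]
    coded = bytearray(len(packets[0]))
    for coeff, packet in zip(coeffs, packets):
        if coeff:
            for i, byte in enumerate(packet):
                coded[i] ^= byte
    return coeffs, bytes(coded)

# Example: an intermediate node mixing packets from two different sources.
coeffs, coded = random_linear_combine([b"source-A-data...", b"source-B-data..."])
```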
Abstract:
Background: Consensus-based approaches provide an alternative to evidence-based decision making, especially in situations where high-level evidence is limited. Our aim was to demonstrate a novel source of information, objective consensus based on recommendations in decision tree format from multiple sources.
Methods: Based on nine sample recommendations in decision tree format, a representative analysis was performed. The most common (mode) recommendations for each eventuality (each permutation of parameters) were determined. The same procedure was applied to real clinical recommendations for primary radiotherapy for prostate cancer. Data was collected from 16 radiation oncology centres, converted into decision tree format and analyzed in order to determine the objective consensus.
Results: Based on information from multiple sources in decision tree format, treatment recommendations can be assessed for every parameter combination. An objective consensus can be determined by means of mode recommendations without compromise or confrontation among the parties. In the clinical example involving prostate cancer therapy, three parameters were used with two cut-off values each (Gleason score, PSA, T-stage), resulting in a total of 27 possible combinations per decision tree. Despite significant variations among the recommendations, a mode recommendation could be found for specific combinations of parameters.
Conclusion: Recommendations represented as decision trees can serve as a basis for objective consensus among multiple parties.
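A minimal sketch of the mode-based objective consensus step described above. The parameter levels and the idea of representing each centre's recommendation as a callable are illustrative assumptions; the centres' actual decision trees and recommendation labels are not reproduced here.

```python
from collections import Counter
from itertools import product

# Two cut-off values per parameter give three categories each, hence
# 3 x 3 x 3 = 27 combinations, as in the prostate cancer example.
LEVELS = {
    "gleason": ["low", "intermediate", "high"],
    "psa": ["low", "intermediate", "high"],
    "t_stage": ["low", "intermediate", "high"],
}

def objective_consensus(decision_trees):
    """For each parameter combination, tally the recommendation returned by each
    centre's decision tree (a callable mapping a case dict to a recommendation)
    and keep the most common (mode) one."""
    consensus = {}
    for combo in product(*LEVELS.values()):
        case = dict(zip(LEVELS.keys(), combo))
        votes = Counter(tree(case) for tree in decision_trees)
        recommendation, count = votes.most_common(1)[0]
        consensus[combo] = (recommendation, count, len(decision_trees))
    return consensus
```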
Abstract:
Content-Centric Networking (CCN) naturally supports multi-path communication, as it allows the simultaneous use of multiple interfaces (e.g. LTE and WiFi). When multiple sources and multiple clients are considered, the optimal set of distribution trees should be determined in order to make optimal use of all the available interfaces. This is not a trivial task, as it is a computationally intense procedure that must be done centrally. The need for central coordination can be removed by employing network coding, which also offers improved resiliency to errors and large throughput gains. In this paper, we propose NetCodCCN, a protocol for integrating network coding in CCN. In comparison to previous works proposing to enable network coding in CCN, NetCodCCN permits Interest aggregation and Interest pipelining, which reduce data retrieval times. The experimental evaluation shows that the proposed protocol leads to significant improvements in terms of content retrieval delay compared to the original CCN. Our results demonstrate that the use of network coding adds robustness to losses and allows the available network resources to be exploited more efficiently. The performance gains are verified for content retrieval in various network scenarios.
Abstract:
Syndromic surveillance (SyS) systems currently exploit various sources of health-related data, most of which are collected for purposes other than surveillance (e.g. economic). Several European SyS systems use data collected during meat inspection for syndromic surveillance of animal health, as some diseases may be more easily detected post-mortem than at their point of origin or during the ante-mortem inspection upon arrival at the slaughterhouse. In this paper we use simulation to evaluate the performance of a quasi-Poisson regression (also known as an improved Farrington) algorithm for the detection of disease outbreaks during post-mortem inspection of slaughtered animals. When the algorithm was parameterized based on the retrospective analysis of 6 years of historic data, the probability of detection was satisfactory for large (range 83-445 cases) outbreaks but poor for small (range 20-177 cases) outbreaks. Varying the amount of historical data used to fit the algorithm can help increase the probability of detection for small outbreaks. However, while the use of a 0·975 quantile generated a low false-positive rate, in most cases more than 50% of outbreak cases had already occurred at the time of detection. The high variance observed in the whole-carcass condemnation time series and the lack of flexibility in the temporal distribution of simulated outbreaks resulting from the low (monthly) reporting frequency constitute major challenges for early detection of outbreaks in the livestock population based on meat inspection data. Reporting frequency should be increased in the future to improve the timeliness of the SyS system, while increased sensitivity may be achieved by integrating meat inspection data into a multivariate system that simultaneously evaluates multiple sources of data on livestock health.
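As an illustration of the detection principle, the sketch below fits a quasi-Poisson regression to historical counts and flags the current count when it exceeds an upper prediction bound at the chosen quantile. It omits the seasonality terms and the down-weighting of past outbreaks used by the full improved-Farrington algorithm, so it is a simplified, assumption-laden example rather than the method evaluated in the paper.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def exceeds_threshold(history, current, quantile=0.975):
    """Flag `current` when it exceeds an approximate upper prediction bound of a
    quasi-Poisson (overdispersed Poisson) fit to `history`.

    Simplification: intercept plus linear trend only, with a normal-approximation
    threshold mu + z * sqrt(phi * mu); no seasonal terms or reweighting.
    """
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    fit = sm.GLM(history, X, family=sm.families.Poisson()).fit(scale="X2")  # quasi-Poisson
    mu = fit.predict(np.array([[1.0, float(len(history))]]))[0]  # expected count, next period
    phi = max(fit.scale, 1.0)                                    # estimated overdispersion
    upper = mu + stats.norm.ppf(quantile) * np.sqrt(phi * mu)
    return current > upper
```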
Abstract:
Zeki and co-workers recently proposed that perception can best be described as locally distributed, asynchronous processes that each create a kind of microconsciousness, which condense into an experienced percept. The present article aims to extend this theory to metacognitive feelings. We present evidence that perceptual fluency, the subjective feeling of ease during perceptual processing, is based on the speed of processing at different stages of the perceptual process. Specifically, detection of briefly presented stimuli was influenced by figure-ground contrast, but not by the symmetry (Experiment 1) or the font (Experiment 2) of the stimuli. Conversely, discrimination of these stimuli was influenced by whether they were symmetric (Experiment 1) and by the font they were presented in (Experiment 2), but not by figure-ground contrast. Both tasks, however, were related to the subjective experience of fluency (Experiments 1 and 2). We conclude that subjective fluency is the conscious phenomenal correlate of different processing stages in visual perception.
Abstract:
OBJECTIVE: To determine the accuracy of magnetic resonance imaging criteria for the early diagnosis of multiple sclerosis in patients with suspected disease. DESIGN: Systematic review. DATA SOURCES: 12 electronic databases, citation searches, and reference lists of included studies. REVIEW METHODS: Studies on accuracy of diagnosis that compared magnetic resonance imaging, or diagnostic criteria incorporating such imaging, to a reference standard for the diagnosis of multiple sclerosis. RESULTS: 29 studies (18 cohort studies, 11 other designs) were included. On average, studies of other designs (mainly diagnostic case-control studies) produced higher estimated diagnostic odds ratios than did cohort studies. Among 15 studies of higher methodological quality (cohort design, clinical follow-up as reference standard), those with longer follow-up produced higher estimates of specificity and lower estimates of sensitivity. Only two such studies followed patients for more than 10 years. Even in the presence of many lesions (> 10 or > 8), magnetic resonance imaging could not accurately rule multiple sclerosis in (likelihood ratio of a positive test result 3.0 and 2.0, respectively). Similarly, the absence of lesions was of limited utility in ruling out a diagnosis of multiple sclerosis (likelihood ratio of a negative test result 0.1 and 0.5). CONCLUSIONS: Many evaluations of the accuracy of magnetic resonance imaging for the early detection of multiple sclerosis have produced inflated estimates of test performance owing to methodological weaknesses. Use of magnetic resonance imaging to confirm multiple sclerosis on the basis of a single attack of neurological dysfunction may lead to over-diagnosis and over-treatment.
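To illustrate how such likelihood ratios translate into post-test probabilities, the worked example below applies standard Bayesian updating via odds. The 10% pre-test probability is a hypothetical value chosen only for illustration; the likelihood ratios are those quoted in the abstract.

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability and a likelihood ratio into a post-test
    probability using odds (standard Bayesian updating)."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Hypothetical 10% pre-test probability of multiple sclerosis:
print(post_test_probability(0.10, 3.0))  # ~0.25: many lesions still far from confirming MS
print(post_test_probability(0.10, 0.1))  # ~0.01: absence of lesions lowers, but does not exclude, MS
```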
Abstract:
A multiple source model (MSM) for the 6 MV beam of a Varian Clinac 2300 C/D was developed by simulating radiation transport through the accelerator head for a set of square fields using the GEANT Monte Carlo (MC) code. The corresponding phase space (PS) data enabled the characterization of 12 sources representing the main components of the beam defining system. By parametrizing the source characteristics and by evaluating the dependence of the parameters on field size, it was possible to extend the validity of the model to arbitrary rectangular fields that include the central 3 x 3 cm2 field, without additional precalculated PS data. Finally, a sampling procedure was developed in order to reproduce the PS data. To validate the MSM, the fluence, energy fluence and mean energy distributions determined from the original and the reproduced PS data were compared and showed very good agreement. In addition, the MC-calculated primary energy spectrum was verified against an energy spectrum derived from transmission measurements. Comparisons of MC-calculated depth dose curves and profiles, using the original PS data and the PS data reproduced by the MSM, agree within 1% and 1 mm. Deviations from measured dose distributions are within 1.5% and 1 mm, although the real beam leads to somewhat larger deviations outside the geometrical beam area for large fields. Calculated output factors at 10 cm water depth agree within 1.5% with experimentally determined data. In conclusion, the MSM produces accurate PS data for MC photon dose calculations for the rectangular fields specified.
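The sketch below illustrates, in heavily simplified form, what sampling from a parameterized multiple source model can look like: a sub-source is chosen by relative weight, and a particle is drawn from that sub-source's parameterized distributions. All sub-source names, weights, distributions, and parameter values are placeholders, not the ones characterized for the Clinac 2300 C/D in the paper.

```python
import random

# Placeholder sub-sources: relative weight plus toy energy and lateral-spread parameters.
SOURCES = [
    {"name": "target",            "weight": 0.93, "mean_energy": 2.0, "sigma_r": 0.1},
    {"name": "flattening filter", "weight": 0.05, "mean_energy": 1.0, "sigma_r": 1.5},
    {"name": "collimators",       "weight": 0.02, "mean_energy": 0.8, "sigma_r": 3.0},
]

def sample_particle():
    """Draw one photon from the toy multi-source model (illustrative only)."""
    src = random.choices(SOURCES, weights=[s["weight"] for s in SOURCES])[0]
    energy = random.expovariate(1.0 / src["mean_energy"])  # placeholder energy spectrum
    x = random.gauss(0.0, src["sigma_r"])                  # placeholder lateral profile
    y = random.gauss(0.0, src["sigma_r"])
    return {"source": src["name"], "energy_MeV": energy, "x_cm": x, "y_cm": y}

print(sample_particle())
```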
Abstract:
In a prospective memory task, responding to a prospective memory target involves switching between the ongoing task and the prospective memory task, which can result in a slowing of subsequent ongoing-task performance (i.e., an after-effect). A slowing can also occur when prospective memory targets appear after the prospective memory task has been deactivated (i.e., another after-effect). In this study, we investigated both after-effects together and also tested whether the latter after-effect even occurs on subsequent ongoing-task trials. The results show after-effects of all kinds. Thus, (1) correctly responding to prospective memory targets results in after-effects, a so far neglected cost on ongoing-task performance, (2) responding to deactivated prospective memory targets also slows down performance, probably due to the involuntary retrieval of the intention, and (3) this slowing is present even on subsequent ongoing-task trials, suggesting that even deactivated intentions are sufficient to induce a conflict that requires subsequent adaptation. Overall, these results indicate that performance slowing in a prospective memory experiment has various sources, not only monitoring cost, and that these sources may be best understood in terms of conflict adaptation.