902 results for Density-based Scanning Algorithm


Relevance: 30.00%

Abstract:

This thesis proposes a set of adaptive broadcast solutions and an adaptive data replication solution to support the deployment of P2P applications. P2P applications are an emerging type of distributed application that runs on top of P2P networks; typical examples are video streaming and file sharing. While interesting because they are fully distributed, P2P applications suffer from several deployment problems due to the nature of the environment in which they operate. Indeed, defining an application on top of a P2P network often means defining an application in which peers contribute resources in exchange for their ability to use the application. For example, in a P2P file-sharing application, while a user is downloading a file, the application is simultaneously serving that file to other users. Such peers may have limited hardware resources (e.g., CPU, bandwidth, and memory), or the end user may decide a priori to limit the resources dedicated to the application. In addition, a P2P network is typically immersed in an unreliable environment, where communication links and processes are subject to message losses and crashes, respectively.

To support P2P applications, this thesis proposes a set of services that address some underlying constraints related to the nature of P2P networks. The proposed services include a set of adaptive broadcast solutions and an adaptive data replication solution that can be used as the basis of several P2P applications. Our data replication solution increases availability and reduces communication overhead. The broadcast solutions aim at providing a communication substrate encapsulating one of the key communication paradigms used by P2P applications: broadcast. They typically aim at offering reliability and scalability to some upper layer, be it an end-to-end P2P application or another system-level layer, such as a data replication layer. Our contributions are organized in a protocol stack made of three layers; in each layer, we propose a set of adaptive protocols that address specific constraints imposed by the environment, and each protocol is evaluated through a set of simulations.

The adaptiveness of our solutions relies on the fact that they take the constraints of the underlying system into account in a proactive manner. To model these constraints, we define an environment approximation algorithm that provides an approximate view of the system, or of part of it, including the topology and the reliability of components expressed in probabilistic terms. To adapt to the underlying system constraints, the proposed broadcast solutions route messages through tree overlays so as to maximize broadcast reliability. Here, broadcast reliability is expressed as a function of the reliability of the selected paths and of the use of available resources, which are modeled as message quotas reflecting the receiving and sending capacities of each node. To allow deployment in a large-scale system, we take the available memory at each process into account by limiting the view it has to maintain about the system. Using this partial view, we propose three scalable broadcast algorithms, based on a propagation overlay that tends toward the global tree overlay and adapts to some constraints of the underlying system.

At a higher level, this thesis also proposes a data replication solution that is adaptive both in terms of replica placement and in terms of request routing. At the routing level, this solution takes the unreliability of the environment into account in order to maximize the reliable delivery of requests. At the replica placement level, the dynamically changing origin and frequency of read/write requests are analyzed in order to define a set of replicas that minimizes communication cost.
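
Since the broadcast solutions route messages over tree overlays chosen to maximize path reliability, that building block is worth making concrete. A minimal sketch follows, assuming per-link delivery probabilities: maximizing the product of link reliabilities along each path is equivalent to minimizing the sum of their negative logarithms, so a Dijkstra-style search yields the most reliable root-to-node paths. The function name, graph, and values are illustrative only; the thesis's actual protocols additionally handle message quotas and partial views.

    # Sketch: most-reliable broadcast tree via Dijkstra on -log(reliability).
    import heapq
    import math

    def max_reliability_tree(links, root):
        """links: dict mapping (u, v) -> delivery probability in (0, 1].
        Returns parent pointers of a tree maximizing root-to-node reliability."""
        graph = {}
        for (u, v), p in links.items():
            graph.setdefault(u, []).append((v, p))
            graph.setdefault(v, []).append((u, p))
        best = {root: 0.0}            # accumulated -log(reliability)
        parent = {root: None}
        heap = [(0.0, root)]
        while heap:
            cost, u = heapq.heappop(heap)
            if cost > best.get(u, math.inf):
                continue                      # stale heap entry
            for v, p in graph[u]:
                c = cost - math.log(p)
                if c < best.get(v, math.inf):
                    best[v], parent[v] = c, u
                    heapq.heappush(heap, (c, v))
        return parent

    # Node 'a' broadcasts; the indirect route a-b-c (0.9 * 0.8) beats a-c (0.5).
    links = {('a', 'b'): 0.9, ('a', 'c'): 0.5, ('b', 'c'): 0.8, ('c', 'd'): 0.95}
    print(max_reliability_tree(links, 'a'))  # {'a': None, 'b': 'a', 'c': 'b', 'd': 'c'}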

Relevance: 30.00%

Abstract:

INTRODUCTION: Optimal identification of subtle cognitive impairment in the primary care setting requires a very brief tool combining (a) patients' subjective impairments, (b) cognitive testing, and (c) information from informants. The present study developed a new, very quick and easily administered case-finding tool combining these assessments ('BrainCheck') and tested the feasibility and validity of this instrument in two independent studies. METHODS: We developed a case-finding tool comprising patient-directed (a) questions about memory and depression and (b) clock drawing, and (c) the informant-directed 7-item version of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Feasibility study: 52 general practitioners rated the feasibility and acceptance of the patient-directed tool. Validation study: an independent group of 288 Memory Clinic patients (mean ± SD age = 76.6 ± 7.9 years, education = 12.0 ± 2.6 years; 53.8% female) with diagnoses of mild cognitive impairment (n = 80), probable Alzheimer's disease (n = 185), or major depression (n = 23), together with 126 demographically matched, cognitively healthy volunteers (age = 75.2 ± 8.8 years, education = 12.5 ± 2.7 years; 40% female), participated. All patients and healthy controls were administered the patient-directed tool, and informants of 113 patients and 70 healthy controls completed the very short IQCODE. RESULTS: Feasibility study: general practitioners rated the patient-directed tool as highly feasible and acceptable. Validation study: a Classification and Regression Tree analysis generated an algorithm to categorize the patient-directed data, which resulted in a correct classification rate (CCR) of 81.2% (sensitivity = 83.0%, specificity = 79.4%). Critically, the CCR of the combined patient- and informant-directed instruments (BrainCheck) reached 89.4% (sensitivity = 97.4%, specificity = 81.6%). CONCLUSION: A new and very brief instrument for general practitioners, 'BrainCheck', combined three sources of information deemed critical for effective case-finding (patients' subjective impairments, cognitive testing, and informant information) and yielded a CCR of nearly 90%. It thus provides an efficient and valid tool to help general practitioners decide whether patients with suspected cognitive impairment should be evaluated further or followed by 'watchful waiting'.
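
For readers less familiar with the reported metrics, the sketch below shows how the correct classification rate, sensitivity, and specificity all derive from one 2×2 confusion matrix. The counts are hypothetical, chosen only to roughly echo the patient-directed tool's reported rates, and are not the study's data.

    # Illustrative only: metrics from a 2x2 confusion matrix (hypothetical counts).
    def classification_metrics(tp, fn, tn, fp):
        sensitivity = tp / (tp + fn)           # patients correctly flagged
        specificity = tn / (tn + fp)           # healthy controls correctly cleared
        ccr = (tp + tn) / (tp + fn + tn + fp)  # correct classification rate
        return sensitivity, specificity, ccr

    sens, spec, ccr = classification_metrics(tp=83, fn=17, tn=79, fp=21)
    print(f"sensitivity={sens:.1%} specificity={spec:.1%} CCR={ccr:.1%}")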

Relevance: 30.00%

Abstract:

The occurrence of negative values of the Fukui function was studied through the electronegativity equalization method (EEM). Using algebraic relations between the Fukui function and several other conceptual DFT quantities on the one hand, and the hardness matrix on the other, expressions were obtained for the Fukui functions of several archetypical small molecules. Based on EEM calculations for large molecular sets, no negative Fukui functions were found.
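
The algebraic route from the hardness matrix to condensed Fukui functions can be illustrated numerically. The sketch assumes the commonly quoted EEM relation f = H⁻¹1 / (1ᵀ H⁻¹ 1), where H is the atom-atom hardness matrix and 1 a vector of ones; the matrix below is a made-up symmetric example, not a calibrated EEM parameterization.

    # Sketch: condensed Fukui functions from a (made-up) hardness matrix.
    import numpy as np

    def fukui_from_hardness(H):
        ones = np.ones(H.shape[0])
        x = np.linalg.solve(H, ones)  # computes H^-1 1 without explicit inversion
        return x / x.sum()            # normalization: the f_k sum to 1

    H = np.array([[10.0,  1.2,  0.8],
                  [ 1.2,  9.0,  1.0],
                  [ 0.8,  1.0, 11.0]])
    f = fukui_from_hardness(H)
    print(f, f.sum())  # all entries positive for this diagonally dominant H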

Relevance: 30.00%

Abstract:

The effect of the basis set superposition error (BSSE) on molecular complexes is analyzed. The BSSE causes artificial delocalizations which modify the first-order electron density. The mechanism of this effect is assessed for the hydrogen fluoride dimer with several basis sets. The BSSE-corrected first-order electron density is obtained using the chemical Hamiltonian approach versions of the Roothaan and Kohn-Sham equations. The corrected densities are compared with uncorrected densities on the basis of the charge density critical points. Contour difference maps between BSSE-corrected and uncorrected densities in the molecular plane are also plotted to gain insight into the effect of the BSSE correction on the electron density.
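
The kind of contour difference map described can be sketched schematically: Δρ = ρ(corrected) − ρ(uncorrected) evaluated on a grid spanning the molecular plane. The two Gaussians below are stand-ins that merely mimic a small BSSE-induced charge shift; real densities would come from the CHA-corrected and standard calculations.

    # Schematic difference map; Gaussians stand in for actual electron densities.
    import numpy as np
    import matplotlib.pyplot as plt

    x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))

    def gaussian_density(x0, y0, width):
        return np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / width)

    rho_uncorrected = gaussian_density(-0.9, 0.0, 0.5) + gaussian_density(0.9, 0.0, 0.5)
    # Correcting BSSE pulls the artificially delocalized charge slightly apart.
    rho_corrected = gaussian_density(-1.0, 0.0, 0.5) + gaussian_density(1.0, 0.0, 0.5)

    plt.contour(x, y, rho_corrected - rho_uncorrected, levels=15)
    plt.title("Schematic difference map: corrected minus uncorrected density")
    plt.savefig("delta_rho.png")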

Relevance: 30.00%

Abstract:

A procedure based on quantum molecular similarity measures (QMSM) has been used to compare electron densities obtained from conventional ab initio and density functional methodologies at their respective optimized geometries. The method has been applied to a series of small molecules whose properties are known experimentally and whose bonds span diverse degrees of ionicity and covalency. Results show that in most cases the electron densities obtained from density functional methodologies are of similar quality to post-Hartree-Fock generalized densities. For molecules where the Hartree-Fock methodology yields erroneous results, the density functional methodology is shown to usually yield more accurate densities than those provided by second-order Møller-Plesset perturbation theory.
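
A standard QMSM is the overlap integral Z_AB = ∫ ρ_A(r) ρ_B(r) dr, often normalized into the Carbó index C_AB = Z_AB / √(Z_AA Z_BB); taking this particular measure as representative is an assumption here, and the one-dimensional toy densities below are purely illustrative.

    # Sketch: Carbó similarity index between two densities sampled on a grid.
    import numpy as np

    def carbo_index(rho_a, rho_b, dv):
        z_ab = np.sum(rho_a * rho_b) * dv
        z_aa = np.sum(rho_a * rho_a) * dv
        z_bb = np.sum(rho_b * rho_b) * dv
        return z_ab / np.sqrt(z_aa * z_bb)  # 1.0 means identical density shapes

    r = np.linspace(-5, 5, 1001)
    dv = r[1] - r[0]                         # 1D volume element
    rho_hf  = np.exp(-(r - 0.00) ** 2)       # toy "ab initio" density
    rho_dft = np.exp(-(r - 0.05) ** 2)       # toy "DFT" density, slightly shifted
    print(carbo_index(rho_hf, rho_dft, dv))  # close to, but just below, 1.0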

Relevance: 30.00%

Abstract:

In silico screening has become a valuable tool in drug design, but some drug targets represent real challenges for docking algorithms. This is especially true for metalloproteins, whose interactions with ligands are difficult to parametrize. Our docking algorithm, EADock, is based on the CHARMM force field, which assures a physically sound scoring function and good transferability to a wide range of systems, but it also exhibits difficulties in the case of some metalloproteins. Here, we consider the therapeutically important case of heme proteins featuring an iron core at the active site. Using a standard docking protocol, in which the iron-ligand interaction is underestimated, we obtained a success rate of 28% for a test set of 50 heme-containing complexes with iron-ligand contact. By introducing Morse-like metal binding potentials (MMBP), which are fitted to reproduce density functional theory calculations, we were able to increase the success rate to 62%. The remaining failures are mainly due to specific ligand-water interactions in the X-ray structures. Testing of the MMBP on a second data set of non-iron binders (14 cases) demonstrates that they do not introduce a spurious bias towards metal binding, which suggests that they may also be used reliably for cross-docking studies.
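
The functional form behind such a term is the standard Morse potential, V(r) = De·(1 − e^(−a(r − re)))² − De. The sketch below uses placeholder parameters; the actual MMBP well depths, widths, and equilibrium distances were fitted to the DFT data and are not reproduced here.

    # Sketch of a Morse-like metal binding term with placeholder parameters.
    import math

    def morse_metal_potential(r, de=15.0, a=2.0, re=2.1):
        """Energy (arbitrary units) of an iron-ligand contact at distance r (Å).
        Minimum of -de at r = re; tends to 0 at large separation."""
        return de * (1.0 - math.exp(-a * (r - re))) ** 2 - de

    for r in (1.8, 2.1, 2.5, 4.0):
        print(r, round(morse_metal_potential(r), 2))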

Relevance: 30.00%

Abstract:

Functional RNA structures play an important role both in noncoding RNA transcripts and in regulatory elements of mRNAs. Here we present a computational study to detect functional RNA structures within the ENCODE regions of the human genome. Since structural RNAs generally lack characteristic signals in their primary sequence, comparative approaches that evaluate the evolutionary conservation of structures are most promising. We have used three recently introduced programs based on either phylogenetic stochastic context-free grammars (EvoFold) or energy-directed folding (RNAz and AlifoldZ), yielding several thousand candidate structures (corresponding to ∼2.7% of the ENCODE regions). EvoFold has its highest sensitivity in highly conserved and relatively AU-rich regions, while RNAz favors slightly GC-rich regions, resulting in a relatively small overlap between the methods. Comparison with the GENCODE annotation points to functional RNAs in all genomic contexts, with a slightly increased density in 3′-UTRs. While we estimate a significant false discovery rate of ∼50%–70%, many of the predictions can be further substantiated by additional criteria: 248 loci are predicted by both RNAz and EvoFold, and an additional 239 RNAz or EvoFold predictions are supported by the (more stringent) AlifoldZ algorithm. Five hundred seventy RNAz structure predictions fall into regions that also show signs of selection pressure at the sequence level (i.e., conserved elements). More than 700 predictions overlap with noncoding transcripts detected by oligonucleotide tiling arrays. One hundred seventy-five selected candidates were tested by RT-PCR in six tissues, and expression could be verified in 43 cases (24.6%).

Relevance: 30.00%

Abstract:

Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) that summarize the most relevant information. The method was developed specifically for drug discovery; it is fast and requires no human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structures of a large number of ligand-receptor complexes and then comparing the positions of the selected hot spots with the actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to compute GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
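
One simple way such an extraction could work is sketched below: greedily keep the most favorable (lowest-energy) grid points that are not too close to an already selected point. This is only an illustration of reducing thousands of MIF values to a few hot spots; the paper's actual procedure is not reproduced, and all names and thresholds are placeholders.

    # Sketch: greedy hot-spot selection from a MIF sampled on grid points.
    import numpy as np

    def extract_hot_spots(coords, energies, n_spots=5, min_dist=2.0):
        """coords: (N, 3) grid positions; energies: (N,) interaction energies
        (negative = favorable). Returns indices of the selected hot spots."""
        order = np.argsort(energies)  # most favorable points first
        chosen = []
        for i in order:
            if all(np.linalg.norm(coords[i] - coords[j]) >= min_dist for j in chosen):
                chosen.append(i)
            if len(chosen) == n_spots:
                break
        return chosen

    rng = np.random.default_rng(0)
    coords = rng.uniform(-10, 10, size=(1000, 3))  # fake MIF grid
    energies = rng.normal(0.0, 1.0, size=1000)     # fake interaction energies
    print(extract_hot_spots(coords, energies))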

Relevance: 30.00%

Abstract:

In the past, sensor networks in cities have been limited to fixed sensors, embedded in particular locations, under centralised control. Today, new applications can leverage wireless devices and use them as sensors to create aggregated information. In this paper, we show that the emerging patterns unveiled through the analysis of large sets of aggregated digital footprints can provide novel insights into how people experience the city and into some of the drivers behind these emerging patterns. In particular, we explore the capacity to quantify the evolution of the attractiveness of urban space with a case study in the area of the New York City Waterfalls, a public art project of four man-made waterfalls rising from the New York Harbor. Methods to study the impact of an event of this nature are traditionally based on the collection of static information, such as surveys and ticket-based people counts, which make it possible to estimate visitors' presence in specific areas over time. In contrast, our contribution makes use of the dynamic data that visitors generate, such as the density and distribution of aggregate phone calls and photos taken in different areas of interest over time. Our analysis provides novel ways to quantify the impact of a public event on the distribution of visitors and on the evolution of the attractiveness of nearby points of interest. This information has potential uses for local authorities and researchers, as well as for service providers such as mobile network operators.

Relevance: 30.00%

Abstract:

A new graph-based construction of generalized low-density (GLD) codes with binary BCH constituents (GLD-Tanner) is described. The proposed family of GLD codes is optimal on block erasure channels and quasi-optimal on block fading channels, where optimality is considered in the outage probability sense. A classical GLD code for ergodic channels (e.g., the AWGN channel, the i.i.d. Rayleigh fading channel, and the i.i.d. binary erasure channel) is built by connecting bit nodes and subcode nodes via a single random edge permutation. In the proposed construction of full-diversity GLD codes (referred to as root GLD), bit nodes are divided into 4 classes, subcodes are divided into 2 classes, and both sides of the Tanner graph are linked via 4 random edge permutations. The study focuses on non-ergodic channels with two states and can easily be extended to channels with 3 or more states.
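
To make the random-edge-permutation step concrete, here is a minimal sketch of the classical single-permutation wiring, assuming the usual GLD convention that each bit node carries two edge sockets while each subcode node of an (n, k) constituent code carries n. The root-GLD refinement (node classes and four permutations) is omitted, and all names are illustrative.

    # Sketch: wiring a classical GLD Tanner graph with one random permutation.
    import random

    def gld_tanner_edges(num_bits, subcode_len, seed=None):
        """Returns (bit node, subcode node) edge pairs. Each bit has 2 sockets,
        so 2 * num_bits sockets are matched to subcode sockets, subcode_len each."""
        sockets = 2 * num_bits
        assert sockets % subcode_len == 0, "sockets must fill subcode nodes exactly"
        bit_sockets = [b for b in range(num_bits) for _ in range(2)]
        perm = list(range(sockets))
        random.Random(seed).shuffle(perm)  # the single random edge permutation
        # Subcode-side socket s belongs to subcode node s // subcode_len.
        return [(bit_sockets[i], perm[i] // subcode_len) for i in range(sockets)]

    edges = gld_tanner_edges(num_bits=15, subcode_len=6, seed=1)  # 5 subcode nodes
    print(edges[:6])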

Relevance: 30.00%

Abstract:

BACKGROUND: Tests for recent infections (TRIs) are important for HIV surveillance. We have shown that a patient's antibody pattern in a confirmatory line immunoassay (Inno-Lia) also yields information on time since infection. We have published algorithms which, with a certain sensitivity and specificity, distinguish between incident (≤ 12 months) and older infection. In order to use these algorithms like other TRIs, i.e., based on their windows, we determined their window periods. METHODS: We classified the Inno-Lia results of 527 treatment-naïve patients with HIV-1 infection of ≤ 12 months' duration according to incidence by 25 algorithms. The time after which all infections were ruled older, i.e., the algorithm's window, was determined by linear regression of the proportion ruled incident as a function of time since infection. Window-based incident infection rates (IIR) were determined using the relationship 'Prevalence = Incidence × Duration' in four annual cohorts of HIV-1 notifications. The results were compared to performance-based IIR, also derived from Inno-Lia results but using the relationship 'incident = true incident + false incident', and to the IIR derived from the BED incidence assay. RESULTS: Window periods varied between 45.8 and 130.1 days and correlated well with the algorithms' diagnostic sensitivity (R² = 0.962; P < 0.0001). Among the 25 algorithms, the mean window-based IIR among the 748 notifications of 2005/06 was 0.457, compared to 0.453 for the performance-based IIR with a model not correcting for selection bias. Evaluation of BED results using a window of 153 days yielded an IIR of 0.669. Window-based and performance-based IIR increased by 22.4% and 30.6%, respectively, in 2008, while 2009 and 2010 showed a return to baseline for both methods. CONCLUSIONS: IIR estimates from window-based and performance-based evaluations of Inno-Lia algorithm results were similar and can be used together to assess IIR changes between annual HIV notification cohorts.
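
The window-based estimate follows directly from the stated relationship 'Prevalence = Incidence × Duration', i.e., Incidence = Prevalence / Duration. The sketch below applies it to hypothetical counts; these are not the study's notification data.

    # Back-of-the-envelope window-based incidence, assuming P = I x D.
    def window_based_iir(n_recent, n_total, window_days):
        prevalence = n_recent / n_total        # fraction classified "recent"
        duration_years = window_days / 365.25  # the algorithm's window period
        return prevalence / duration_years     # incident infections per person-year

    # e.g., 100 of 748 notifications ruled recent by an algorithm with a 100-day window
    print(round(window_based_iir(100, 748, 100), 3))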

Relevance: 30.00%

Abstract:

1. As trees in a given cohort progress through ontogeny, many individuals die. This risk of mortality is unevenly distributed across species because of many processes such as habitat filtering, interspecific competition and negative density dependence. Here, we predict and test the patterns that such ecological processes should inscribe on both species and phylogenetic diversity as plants recruit from saplings to the canopy.
2. We compared species and phylogenetic diversity of sapling and tree communities at two sites in French Guiana. We surveyed 2084 adult trees in four 1-ha tree plots and 943 saplings in sixteen 16-m² subplots nested within the tree plots. Species diversity was measured using Fisher's alpha (species richness) and Simpson's index (species evenness). Phylogenetic diversity was measured using Faith's phylogenetic diversity (phylogenetic richness) and Rao's quadratic entropy index (phylogenetic evenness); the two evenness indices are sketched after this abstract. The phylogenetic diversity indices were inferred using four phylogenetic hypotheses: two based on rbcLa plastid DNA sequences obtained from the inventoried individuals, with different branch lengths; a global phylogeny available from the Angiosperm Phylogeny Group; and a combination of both.
3. Taxonomic identification of the saplings was performed by combining morphological and DNA barcoding techniques using three plant DNA barcodes (psbA-trnH, rpoC1 and rbcLa). DNA barcoding enabled us to increase species assignment and to assign unidentified saplings to molecular operational taxonomic units.
4. Species richness was similar between saplings and trees, but in about half of our comparisons species evenness was higher in trees than in saplings. This suggests that negative density dependence plays an important role during the sapling-to-tree transition.
5. Phylogenetic richness increased between saplings and trees in about half of the comparisons. Phylogenetic evenness increased significantly between saplings and trees in a few cases (4 out of 16) and only with the most resolved phylogeny. These results suggest that negative density dependence operates largely independently of the phylogenetic structure of communities.
6. Synthesis. By contrasting species richness and evenness across size classes, we suggest that negative density dependence drives shifts in composition during the sapling-to-tree transition. In addition, we found little evidence for a change in phylogenetic diversity across age classes, suggesting that the observed patterns are not phylogenetically constrained.
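
For reference, the two evenness indices can be stated compactly under their textbook definitions: Simpson's index D = 1 − Σ p_i² and Rao's quadratic entropy Q = Σ_ij d_ij p_i p_j, where p_i are relative abundances and d_ij is the phylogenetic distance between species i and j. The abundances and distance matrix below are made up for illustration.

    # Sketch: Simpson's index and Rao's quadratic entropy (textbook definitions).
    import numpy as np

    def simpson_index(abundances):
        p = np.asarray(abundances, dtype=float)
        p /= p.sum()
        return 1.0 - np.sum(p ** 2)             # higher = more even community

    def rao_quadratic_entropy(abundances, dist):
        p = np.asarray(abundances, dtype=float)
        p /= p.sum()
        return float(p @ np.asarray(dist) @ p)  # abundance-weighted mean distance

    abund = [10, 5, 1]                 # made-up species abundances
    dist = np.array([[0.0, 2.0, 6.0],  # made-up pairwise phylogenetic distances
                     [2.0, 0.0, 5.0],
                     [6.0, 5.0, 0.0]])
    print(simpson_index(abund), rao_quadratic_entropy(abund, dist))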

Relevance: 30.00%

Abstract:

Population densities of marked common dormice Muscardinus avellanarius are generally based on nest box checks. As dormice also use natural cavities and leaf nests, we tried to answer the question "what proportion of the population cannot be monitored by nest box checks?" using parallel trapping sessions. We selected a forest of 1.7 ha where a 5-year nest box survey had revealed an annual mean of 3.4 ± 1.4 dormice per check. The trap design (a permanent grid of 77 hanging platforms) was developed in June. During July and August the traps were set every second week (4 sessions of two nights = 8 nights), resulting in a total of 75 captures, a mean of 9.4 dormice per night, and the presence of 16 different individuals. The grid of 60 nest boxes was checked weekly (8 times), which allowed the recapture of 19 dormice, with a mean of 2.4 dormice per check and the presence of 6 different individuals. Population density estimated by the calendar-of-capture and minimum-number-of-dormice-alive methods gave a value of 2.4 animals/ha for nest box checks and 6.6 animals/ha for trap checks, leading to the conclusion that 63% of the population was not being monitored by nest box checks.

Relevance: 30.00%

Abstract:

From a managerial point of view, the more efficient, simple, and parameter-free (ESP) an algorithm is, the more likely it is to be used in practice for solving real-life problems. Following this principle, an ESP algorithm for solving the Permutation Flowshop Sequencing Problem (PFSP) is proposed in this article. Using an Iterated Local Search (ILS) framework, the so-called ILS-ESP algorithm is able to compete in performance with other well-known ILS-based approaches, which are considered among the most efficient algorithms for the PFSP. However, while other similar approaches still employ several parameters that can affect their performance if not properly chosen, our algorithm does not require any particular fine-tuning process, since it uses basic "common sense" rules for the local search, perturbation, and acceptance-criterion stages of the ILS metaheuristic. Our approach defines a new operator for the ILS perturbation process, a new acceptance criterion based on extremely simple and transparent rules, and a biased randomization process for the initial solution that randomly generates different alternative starting solutions of similar quality, which is attained by applying biased randomization to a classical PFSP heuristic. This diversification of the initial solution aims at avoiding poorly designed starting points and thus allows the methodology to take advantage of current trends in parallel and distributed computing. A set of extensive tests, based on literature benchmarks, has been carried out in order to validate our algorithm and compare it against other approaches. These tests show that our parameter-free algorithm is able to compete with state-of-the-art metaheuristics for the PFSP. The experiments also show that, when using parallel computing, it is possible to improve on the top ILS-based metaheuristic simply by incorporating into it our biased randomization process with a high-quality pseudo-random number generator.
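
The overall ILS loop the article builds on can be summarized in a few lines. The sketch below is a bare-bones version for a permutation of job indices, with a plain two-job swap as perturbation and first-improvement reinsertion as local search; ILS-ESP's own operator, acceptance rules, and biased-randomized initialization are richer, and all names here are illustrative.

    # Bare-bones Iterated Local Search for a permutation flowshop instance.
    import random

    def makespan(perm, p):
        """Completion time of the last job on the last machine.
        p[j][m] = processing time of job j on machine m."""
        completion = [0.0] * len(p[0])
        for j in perm:
            completion[0] += p[j][0]
            for k in range(1, len(completion)):
                completion[k] = max(completion[k], completion[k - 1]) + p[j][k]
        return completion[-1]

    def perturb(perm, rng):
        a, b = rng.sample(range(len(perm)), 2)  # simple two-job swap
        perm = perm[:]
        perm[a], perm[b] = perm[b], perm[a]
        return perm

    def local_search(perm, p):
        improved = True
        while improved:                         # first-improvement reinsertion
            improved = False
            base = makespan(perm, p)
            for i in range(len(perm)):
                for j in range(len(perm)):
                    if i == j:
                        continue
                    cand = perm[:]
                    cand.insert(j, cand.pop(i))
                    if makespan(cand, p) < base:
                        perm, base, improved = cand, makespan(cand, p), True
        return perm

    def ils(p, iterations=200, seed=42):
        rng = random.Random(seed)
        best = current = local_search(list(range(len(p))), p)
        for _ in range(iterations):
            candidate = local_search(perturb(current, rng), p)
            if makespan(candidate, p) <= makespan(current, p):  # accept ties too
                current = candidate
            if makespan(current, p) < makespan(best, p):
                best = current
        return best, makespan(best, p)

    jobs = [[3, 2, 4], [1, 4, 2], [2, 3, 1], [4, 1, 3]]  # 4 jobs x 3 machines
    print(ils(jobs))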

Relevance: 30.00%

Abstract:

OBJECTIVE: To demonstrate the validity and reliability of volumetric quantitative computed tomography (vQCT) with multi-slice computed tomography (MSCT) and dual-energy X-ray absorptiometry (DXA) for hip bone mineral density (BMD) measurements, and to compare the two techniques' ability to discriminate postmenopausal women with osteoporosis-related vertebral fractures from those without. METHODS: Ninety subjects were enrolled and divided into three groups based on the BMD values of the lumbar spine and/or the femoral neck by DXA. Groups 1 and 2 consisted of postmenopausal women with BMD changes < −2 SD, with and without radiographically confirmed vertebral fracture (n = 11 and n = 33, respectively). Group 3 comprised normal controls with BMD changes ≥ −1 SD (n = 46). Reconstructed MSCT (GE LightSpeed16) images of the abdominal-pelvic region, at a slice thickness of 1.25 mm, were processed with OsteoCAD software to calculate the following parameters: volumetric BMD of the trabecular bone (TRAB), cortical bone (CORT), and integral bone (INTGL) of the left femoral neck, femoral neck axis length (NAL), and minimum cross-sectional area (mCSA). DXA BMD measurements of the lumbar spine (AP-SPINE) and the left femoral neck (NECK) were also performed for each subject. RESULTS: The values of all seven parameters were significantly lower in the subjects of Groups 1 and 2 than in normal postmenopausal women (P < 0.05 for each). Comparing Groups 1 and 2, 3D-TRAB and 3D-INTGL were significantly lower in postmenopausal women with vertebral fracture(s) [(109.8 ± 9.61) and (243.3 ± 33.0) mg/cm³, respectively] than in those without [(148.9 ± 7.47) and (285.4 ± 17.8) mg/cm³, respectively] (P < 0.05 for each), but no significant differences were evident in AP-SPINE or NECK BMD. CONCLUSION: The femoral neck-derived volumetric BMD parameters obtained with vQCT appeared better than the DXA-derived ones at discriminating osteoporotic postmenopausal women with vertebral fractures from those without. vQCT might be useful for evaluating the effect of osteoporotic vertebral fracture status on changes in femoral neck bone mass.