971 results for Recall campaigns.


Relevance: 10.00%

Abstract:

Many studies investigating the effect of human social connectivity structures (networks) and human behavioral adaptations on the spread of infectious diseases have assumed either a static connectivity structure or a network that adapts itself in response to the epidemic (adaptive networks). However, human social connections are inherently dynamic, or time varying. Furthermore, the spread of many infectious diseases occurs on a time scale comparable to that of the evolving network structure. Here we aim to quantify the effect of human behavioral adaptations on the spread of asymptomatic infectious diseases on time-varying networks. We perform a full stochastic analysis using a continuous-time Markov chain approach to calculate the outbreak probability, mean epidemic duration, epidemic reemergence probability, and related quantities. Additionally, we use mean-field theory to calculate epidemic thresholds. Theoretical predictions are verified using extensive simulations. Our studies have uncovered the existence of an "adaptive threshold": when the ratio of the susceptibility (or infectivity) rate to the recovery rate is below the threshold value, adaptive behavior can prevent the epidemic; if it is above the threshold, no amount of behavioral adaptation can prevent the epidemic. Our analyses suggest that the interaction patterns of the infected population play a major role in sustaining the epidemic. Our results have implications for epidemic containment policies, as awareness campaigns and human behavioral responses can be effective only if the interaction levels of the infected populace are kept in check.
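As a rough illustration of the kind of model described (not the authors' implementation; all parameters, and the discrete-time simplification, are illustrative assumptions), the following sketch simulates SIS spread on a time-varying contact pattern in which infected nodes reduce their contacts, and contrasts a below-threshold run that dies out with an above-threshold run that persists:

```python
import random

def simulate_sis(n=200, beta=0.05, gamma=0.2, adapt=0.5,
                 contacts_per_step=5, steps=400, seed=1):
    """Discrete-time SIS sketch on a time-varying contact pattern.

    Each step, every infected node draws fresh random contacts (a
    crude time-varying network); infected nodes cut their contact
    count by `adapt` to mimic behavioral adaptation.  Illustrative
    stand-in only, not the paper's continuous-time Markov chain.
    """
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), 5))
    history = []
    for _ in range(steps):
        new_inf, recovered = set(), set()
        for i in infected:
            # adapted (reduced) number of contacts for infected nodes
            k = max(1, int(contacts_per_step * (1 - adapt)))
            for j in rng.sample(range(n), k):
                if j not in infected and rng.random() < beta:
                    new_inf.add(j)
            if rng.random() < gamma:
                recovered.add(i)
        infected = (infected | new_inf) - recovered
        history.append(len(infected))
        if not infected:
            break
    return history

low = simulate_sis(beta=0.01)   # beta/gamma well below threshold: dies out
high = simulate_sis(beta=0.2)   # beta/gamma above threshold: persists
```

The contrast between the two runs mirrors the abstract's point: below the adaptive threshold the same behavioral adaptation extinguishes the outbreak, above it the epidemic is sustained regardless.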

Relevance: 10.00%

Abstract:

A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. Which produces the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time consuming. To automate the selection of the best segmented document, we either need the ground truth of the document or an evaluation metric. If ground truth is available, precision and recall can be used to choose the best binarized document. But what if ground truth is not available? Can we devise a metric that evaluates these binarized documents? We therefore propose a metric for evaluating binarized document images using eigenvalue decomposition. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, i.e., the one closest to the ground truth of the document.
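For orientation, the ground-truth-based baseline the abstract mentions is just pixel-wise precision and recall. A minimal sketch (toy 0/1 bitmaps standing in for real document images; not the authors' eigenvalue-based metric):

```python
def precision_recall(pred, truth):
    """Pixel-wise precision/recall of a binarized image against
    ground truth.  Images are flat lists of 0/1, where 1 = foreground
    (ink); a toy stand-in for a real document bitmap."""
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

truth = [1, 1, 0, 0, 1, 0, 1, 0]
pred  = [1, 0, 0, 1, 1, 0, 1, 0]
p, r = precision_recall(pred, truth)   # p = 3/4, r = 3/4
```

Ranking candidate binarizations by such scores is exactly what becomes impossible without ground truth, which motivates the reference-free metric the paper proposes.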

Relevance: 10.00%

Abstract:

In service provisioning, delivering the required service to the user without user intervention, while reducing cognitive overload, is a real challenge. In this paper we propose a user-centred, context-aware collaborative service provisioning system, which makes use of context along with collaboration to provide the required service to the user dynamically. The system uses a novel approach of query expansion along with interactive and rating-matrix-based collaboration. The performance of the system is evaluated in a mobile-commerce environment. The results show that the system is time efficient and performs with better precision and recall than a context-aware system alone.

Relevance: 10.00%

Abstract:

In this paper, we analyse three commonly discussed `flaws' of linearized elasticity theory and attempt to resolve them. The first `flaw' concerns cylindrically orthotropic material models. Since the work of Lekhnitskii (1968), a growing body of work, continuing to this day, has shown that infinite stresses arise with the use of a cylindrically orthotropic material model even in linearized elasticity. Besides infinite stresses, interpenetration of matter is also shown to occur. These infinite stresses and this interpenetration occur when the ratio of the circumferential Young's modulus to the radial Young's modulus is less than one. If the ratio is greater than one, the stresses at the center of a spinning disk are found to be zero (recall that for an isotropic material model, the stresses are maximal at the center). Thus, the stresses go abruptly from a maximum value to zero as the ratio is increased to a value even slightly above one! One explanation offered for this extremely anomalous behaviour is the failure of linearized elasticity to satisfy material frame-indifference. However, if this were the true cause, the anomalous behaviour should also occur with an isotropic material model, where no such anomalies are observed. We show that the real cause of the problem lies elsewhere and show how these anomalies can be resolved. We also discuss how the formulation of linearized elastodynamics in the case of small deformations superposed on a rigid motion can be given in a succinct manner. Finally, we show how the long-standing problem of devising three compatibility relations instead of six can be resolved.
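The abrupt jump the abstract describes can be made explicit by the classical Lekhnitskii-type separation-of-variables scaling (stated here for orientation, not taken from the paper itself): near the center of a spinning cylindrically orthotropic disk the stresses behave as

```latex
\sigma \sim r^{\,k-1}, \qquad k = \sqrt{E_\theta / E_r},
```

so for $k < 1$ the stress is unbounded as $r \to 0$, for $k > 1$ it vanishes at the center, and the isotropic case $k = 1$ recovers a finite maximum there, which is exactly the discontinuous maximum-to-zero transition at a modulus ratio of one.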

Relevance: 10.00%

Abstract:

We address the task of mapping a given textual domain model (e.g., an industry-standard reference model) for a given domain (e.g., ERP) to the source code of an independently developed application in the same domain. This has applications in improving the understandability of an existing application, migrating it to a more flexible architecture, or integrating it with other related applications. We use the vector-space model to abstractly represent domain model elements as well as source-code artifacts. The key novelty in our approach is to leverage the relationships between source-code artifacts in a principled way to improve the mapping process. We describe experiments wherein we apply our approach to the task of matching two real, open-source applications to corresponding industry-standard domain models. We demonstrate the overall usefulness of our approach, as well as the role of our propagation techniques in improving the precision and recall of the mapping task.

Relevance: 10.00%

Abstract:

Background: The number of genome-wide association studies (GWAS) has increased rapidly in the past couple of years, resulting in the identification of genes associated with different diseases. The next step in translating these findings into biomedically useful information is to find out the mechanism of action of these genes. However, GWAS often implicate genes whose functions are currently unknown; for example, MYEOV, ANKLE1, TMEM45B and ORAOV1 are found to be associated with breast cancer, but their molecular function is unknown. Results: We carried out Bayesian inference of Gene Ontology (GO) term annotations of genes by employing the directed acyclic graph structure of GO and the network of protein-protein interactions (PPIs). The approach is designed based on the fact that two proteins that interact biophysically would be in physical proximity of each other, would possess complementary molecular functions, and would play roles in related biological processes. Predicted GO terms were ranked according to their relative association scores, and the approach was evaluated quantitatively by plotting precision versus recall values and F-scores (the harmonic mean of precision and recall) versus varying thresholds. Precisions of ~58% and ~40% for the localization and functions of proteins, respectively, were determined at a threshold of ~30 (top 30 GO terms in the ranked list). Comparison with function prediction based on semantic similarity among nodes in an ontology, with those similarities incorporated in a k-nearest-neighbor classifier, confirmed that our results compared favorably. Conclusions: This approach was applied to predict the cellular component and molecular function GO terms of all human proteins that have interacting partners possessing at least one known GO annotation. The list of predictions is available at http://severus.dbmi.pitt.edu/engo/GOPRED.html. We present the algorithm, the evaluations and the results of the computational predictions, especially for genes identified in GWAS to be associated with diseases, which are of translational interest.
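The precision/recall/F-score sweep over ranked GO terms that the abstract describes can be sketched as follows (toy term IDs and a single threshold k; an illustrative stand-in, not the authors' evaluation code):

```python
def f_score_at_k(ranked, true_set, k):
    """Precision, recall and F-score (harmonic mean of the two) for
    the top-k GO terms in a ranked prediction list, evaluated against
    the known annotations `true_set`."""
    top = set(ranked[:k])
    tp = len(top & true_set)
    precision = tp / k
    recall = tp / len(true_set)
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# hypothetical ranked predictions for one protein
ranked = ["GO:A", "GO:B", "GO:C", "GO:D", "GO:E"]
truth = {"GO:A", "GO:C", "GO:F"}
p, r, f = f_score_at_k(ranked, truth, k=3)   # p = 2/3, r = 2/3
```

Sweeping k from 1 upward yields the precision-versus-recall and F-score-versus-threshold curves used in the paper's evaluation.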

Relevance: 10.00%

Abstract:

We model the spread of information in a homogeneously mixed population using the Maki-Thompson rumor model. We formulate an optimal control problem, from the perspective of a single campaigner, to maximize the spread of information when the campaign budget is fixed. Control signals, such as advertising in the mass media, attempt to convert ignorants and stiflers into spreaders. We show the existence of a solution to the optimal control problem when the campaigning incurs non-linear costs under the isoperimetric budget constraint. The solution employs Pontryagin's Minimum Principle and a modified version of the forward-backward sweep technique for numerical computation, to accommodate the isoperimetric budget constraint. The techniques developed in this paper are general and can be applied to similar optimal control problems in other areas. We have allowed the spreading rate of the information epidemic to vary over the campaign duration, to model practical situations where the interest level of the population in the subject of the campaign changes with time. The shape of the optimal control signal is studied for different model parameters and spreading rate profiles. We have also studied the variation of the optimal campaigning costs with respect to various model parameters. Results indicate that, for some model parameters, significant improvements can be achieved by the optimal strategy compared to the static control strategy. The static strategy respects the same budget constraint as the optimal strategy and has a constant value throughout the campaign horizon. This work finds application in election and social awareness campaigns, product advertising, movie promotion and crowdfunding campaigns. (C) 2014 Elsevier B.V. All rights reserved.
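A minimal forward simulation of the controlled rumor dynamics can clarify the setup (an Euler-integration sketch under the standard Maki-Thompson mean-field equations with a control term converting ignorants into spreaders; rates, horizon and the control value are illustrative assumptions, and no optimization is performed here):

```python
def rumor_with_control(u, beta=0.5, alpha=0.3, dt=0.01, T=30.0):
    """Euler sketch of mean-field Maki-Thompson rumor dynamics with a
    campaigning control u(t) that converts ignorants into spreaders.
    x, y, z: ignorant, spreader, stifler fractions;
    beta: spreading rate, alpha: stifling rate (illustrative values)."""
    x, y, z = 0.99, 0.01, 0.0
    t = 0.0
    while t < T:
        dx = -beta * x * y - u(t) * x
        dy = beta * x * y - alpha * y * (y + z) + u(t) * x
        dz = alpha * y * (y + z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        t += dt
    return x, y, z

# no campaign vs a constant ("static") campaign signal
x0, y0, z0 = rumor_with_control(lambda t: 0.0)
x1, y1, z1 = rumor_with_control(lambda t: 0.05)
# the campaign leaves fewer ignorants, i.e. the message reaches further
```

The optimal control problem in the paper then shapes u(t) over time, under the budget constraint, to minimize the final ignorant fraction rather than holding u constant.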

Relevance: 10.00%

Abstract:

Female mate choice decisions, which influence sexual selection, involve complex interactions between the two sexes and the environment. Theoretical models predict that male movement and spacing in the field should influence female sampling tactics, and in turn, females should drive the evolution of male movement and spacing to sample them optimally. Theoretically, simultaneous sampling of males using the best-of-n or comparative Bayes strategy should yield maximum mating benefits to females. We examined the ecological context of female mate sampling based on acoustic signals in the tree cricket Oecanthus henryi to determine whether the conditions for such optimal strategies were met in the field. These strategies involve recall of the quality and location of individual males, which in turn requires male positions to be stable within a night. Calling males rarely moved within a night, potentially enabling female sampling strategies that require recall. To examine the possibility of simultaneous acoustic sampling of males, we estimated male acoustic active spaces using information on male spacing, call transmission, and female hearing threshold. Males were found to be spaced far apart, and active space overlap was rare. We then examined female sampling scenarios by studying female spacing relative to male acoustic active spaces. Only 15% of sampled females could hear multiple males, suggesting that simultaneous mate sampling is rare in the field. Moreover, the relatively large distances between calling males suggest high search costs, which may favor threshold strategies that do not require memory.

Relevance: 10.00%

Abstract:

Inference of the molecular function of proteins is a fundamental task in the quest for understanding cellular processes. The task is getting increasingly difficult, with thousands of new proteins discovered each day. The difficulty arises primarily from the lack of a high-throughput experimental technique for assessing protein molecular function, a lacuna that computational approaches are trying hard to fill. The latter, too, face a major bottleneck in the absence of clear evidence based on evolutionary information. Here we propose a de novo approach to annotate protein molecular function through a structural-dynamics match for a pair of segments from two dissimilar proteins, which may share even <10% sequence identity. To screen these matches, corresponding 1 μs coarse-grained (CG) molecular dynamics trajectories were used to compute normalized root-mean-square-fluctuation graphs and select mobile segments, which were thereafter matched for all pairs using unweighted three-dimensional autocorrelation vectors. Our in-house custom-built forcefield (FF), extensively validated against dynamics information obtained from experimental nuclear magnetic resonance data, was specifically used to generate the CG dynamics trajectories. The test for correspondence between the dynamics signatures of protein segments and function revealed an 87% true positive rate and a 93.5% true negative rate on a dataset of 60 experimentally validated proteins, including moonlighting proteins and those with novel functional motifs. A random test against 315 unique fold/function proteins, as a negative test, gave >99% true recall. A blind prediction on a novel protein appears consistent with additional evidence retrieved therein. This is the first proof of principle of the generalized use of structural dynamics for inferring protein molecular function, leveraging our custom-made CG FF, useful to all. (C) 2014 Wiley Periodicals, Inc.
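The normalized root-mean-square-fluctuation profile used to select mobile segments can be sketched in a few lines (toy coordinates standing in for a real CG trajectory; not the authors' pipeline):

```python
def normalized_rmsf(traj):
    """Per-residue root-mean-square fluctuation over a trajectory,
    normalized by the maximum so the profile lies in [0, 1].
    `traj` is a list of frames; each frame is a list of (x, y, z)
    positions, one per residue (toy data below)."""
    n_frames = len(traj)
    n_res = len(traj[0])
    rmsf = []
    for r in range(n_res):
        mean = [sum(f[r][d] for f in traj) / n_frames for d in range(3)]
        msd = sum(sum((f[r][d] - mean[d]) ** 2 for d in range(3))
                  for f in traj) / n_frames
        rmsf.append(msd ** 0.5)
    top = max(rmsf)
    return [v / top for v in rmsf] if top else rmsf

# two residues over three frames: residue 0 is rigid, residue 1 mobile
traj = [[(0, 0, 0), (1, 0, 0)],
        [(0, 0, 0), (1, 2, 0)],
        [(0, 0, 0), (1, 0, 2)]]
profile = normalized_rmsf(traj)
```

Segments with high values in such a profile are the mobile segments that the method then matches across proteins via autocorrelation vectors.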

Relevance: 10.00%

Abstract:

Today's programming languages are supported by powerful third-party APIs. For a given application domain, it is common to have many competing APIs that provide similar functionality. Programmer productivity therefore depends heavily on the programmer's ability to discover suitable APIs, both during an initial coding phase and during software maintenance. The aim of this work is to support the discovery and migration of math APIs. Math APIs are at the heart of many application domains, ranging from machine learning to scientific computations. Our approach, called MATHFINDER, combines executable specifications of mathematical computations with unit tests (operational specifications) of API methods. Given a math expression, MATHFINDER synthesizes pseudo-code comprised of API methods to compute the expression by mining unit tests of the API methods. We present a sequential version of our unit test mining algorithm and also design a more scalable data-parallel version. We perform an extensive evaluation of MATHFINDER (1) for API discovery, where math algorithms are to be implemented from scratch, and (2) for API migration, where client programs utilizing a math API are to be migrated to another API. We evaluated the precision and recall of MATHFINDER on a diverse collection of math expressions, culled from algorithms used in a wide range of application areas such as control systems and structural dynamics. In a user study to evaluate the productivity gains obtained by using MATHFINDER for API discovery, the programmers who used MATHFINDER finished their programming tasks twice as fast as their counterparts who used the usual techniques like web and code search, IDE code completion, and manual inspection of library documentation. For the problem of API migration, as a case study, we used MATHFINDER to migrate Weka, a popular machine learning library. Overall, our evaluation shows that MATHFINDER is easy to use, provides highly precise results across several math APIs and application domains even with a small number of unit tests per method, and scales to large collections of unit tests.

Relevance: 10.00%

Abstract:

Cis-peptide-embedded segments are rare in proteins but, when they do occur, often highlight an important role in molecular function. The high evolutionary conservation of these segments illustrates this observation almost universally, although no attempt has been made to systematically use this information for the purpose of function annotation. In the present study, we demonstrate how geometric clustering and level-specific Gene Ontology molecular-function terms (also known as annotations) can be used in a statistically significant manner to identify cis-embedded segments in a protein linked to its molecular function. The present study identifies novel cis-peptide fragments, which are subsequently used for fragment-based function annotation. Annotation recall benchmarks, interpreted using the receiver operating characteristic plot, returned an area under the curve >0.9, corroborating the utility of the annotation method. In addition, we identified cis-peptide fragments occurring in conjunction with functionally important trans-peptide fragments, providing additional insights into molecular function. We further illustrate the applicability of our method to function annotation where homology-based annotation transfer is not possible. The findings of the present study add to the repertoire of function annotation approaches and also facilitate engineering, design and allied studies around the cis-peptide neighborhood of proteins.
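For reference, the area under the ROC curve reported in such recall benchmarks can be computed by the rank (Mann-Whitney) formula: the fraction of (positive, negative) pairs that the score orders correctly, ties counted as half. A minimal sketch with toy scores (not the authors' benchmark data):

```python
def roc_auc(scores, labels):
    """ROC area under the curve via the Mann-Whitney pair-counting
    formula.  `labels` are 1 (true annotation) or 0 (false)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3]   # hypothetical annotation scores
labels = [1, 1, 0, 1, 0]
auc = roc_auc(scores, labels)        # 5 of 6 pairs ordered correctly
```

An AUC of 1.0 means every true annotation outscores every false one; the paper's reported >0.9 indicates near-perfect separation.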

Relevance: 10.00%

Abstract:

Standard Susceptible-Infected-Susceptible (SIS) epidemic models assume that a message spreads from infected to susceptible nodes only through susceptible-infected epidemic contact. We modify the standard SIS epidemic model to include direct recruitment of susceptible individuals into the infected class at a constant rate (independent of epidemic contacts), to accelerate information spreading in a social network. Such recruitment can be carried out by placing advertisements in the media. We provide a closed-form analytical solution for system evolution in the proposed model and use it to study campaigning in two different scenarios. In the first, the net cost function is a linear combination of the reward due to the extent of information diffusion and the cost due to application of control. In the second, the campaign budget is fixed. Results reveal the effectiveness of the proposed system in accelerating and improving the extent of information diffusion. Our work is useful for devising effective strategies for product marketing and political/social-awareness/crowdfunding campaigns that target individuals in a social network.
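The modification can be illustrated with a one-line change to the mean-field SIS equation: besides the contact term, susceptibles flow into the infected (informed) class at a constant rate r. A hedged Euler-integration sketch (rates and horizon are illustrative; the paper itself gives a closed-form solution rather than numerics):

```python
def sis_with_recruitment(beta=0.4, gamma=0.1, r=0.0, i0=0.01,
                         dt=0.01, T=50.0):
    """Euler sketch of mean-field SIS with direct recruitment:
    di/dt = beta*s*i + r*s - gamma*i, with s = 1 - i.
    r models media advertisement converting susceptibles directly."""
    i, t = i0, 0.0
    while t < T:
        s = 1.0 - i
        i += dt * (beta * s * i + r * s - gamma * i)
        t += dt
    return i

plain = sis_with_recruitment(r=0.0)      # contact-only spreading
boosted = sis_with_recruitment(r=0.05)   # recruitment widens the reach
```

Comparing the two runs shows the effect the abstract reports: constant-rate recruitment both accelerates the spread and raises the eventual informed fraction.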

Relevance: 10.00%

Abstract:

We show here a 2^{Ω(√d·log N)} size lower bound for homogeneous depth-four arithmetic formulas. That is, we give an explicit family of polynomials of degree d in N variables (with N = d^3 in our case) with 0/1 coefficients such that for any representation of a polynomial f in this family of the form f = Σ_i Π_j Q_ij, where the Q_ij's are homogeneous polynomials (recall that a polynomial is said to be homogeneous if all its monomials have the same degree), it must hold that Σ_{i,j} (number of monomials of Q_ij) ≥ 2^{Ω(√d·log N)}. The above-mentioned family, which we refer to as the Nisan-Wigderson design-based family of polynomials, is in the complexity class VNP. Our work builds on the recent lower-bound results [1], [2], [3], [4], [5] and yields an improved quantitative bound compared to the quasi-polynomial lower bound of [6] and the N^{Ω(log log N)} lower bound in the independent work of [7].

Relevance: 10.00%

Abstract:

Aerosol loading over the South Asian region has the potential to affect monsoon rainfall, Himalayan glaciers and regional air quality, with implications for the billions living in this region. While field campaigns and network observations provide primary data, they tend to be location- or season-specific. Numerical models are useful for regionalizing such location-specific data. Studies have shown that numerical models underestimate the aerosol scenario over the Indian region, mainly due to shortcomings related to meteorology and the emission inventories used. In this context, we have evaluated the performance of two such chemistry-transport models, WRF-Chem and SPRINTARS, over an India-centric domain. The models differ in many aspects, including physical domain, horizontal resolution and meteorological forcing. Despite these differences, both models simulated similar spatial patterns of black carbon (BC) mass concentration (with a spatial correlation of 0.9 with each other) and reasonable estimates of its concentration, though both under-estimated it vis-a-vis the observations. While the emissions are lower (higher) in SPRINTARS (WRF-Chem), overestimation of wind parameters in WRF-Chem caused the concentrations to be similar in the two models. Additionally, we quantified the under-estimation of anthropogenic BC emissions in the inventories used by these two models and in three other widely used emission inventories. Our analysis indicates that all these emission inventories underestimate BC emissions over India by a factor ranging from 1.5 to 2.9. We have also studied the model simulations of aerosol optical depth (AOD) over the Indian region. The models differ significantly in their simulations of AOD, with WRF-Chem agreeing better with satellite observations of AOD as far as the spatial pattern is concerned. It is important to note that, in addition to BC, dust can also contribute significantly to AOD. The models differ in their simulations of the spatial pattern of mineral dust over the Indian region. We find that both meteorological forcing and emission formulation contribute to these differences. Since AOD is a column-integrated parameter, the description of vertical profiles in the two models, especially since elevated aerosol layers are often observed over the Indian region, could also be a contributing factor. Additionally, differences in the prescription of the optical properties of BC between the models appear to affect the AOD simulations. We also compared the simulation of sea-salt concentration in the two models and found that WRF-Chem underestimated it vis-a-vis SPRINTARS. The differences in near-surface oceanic wind speeds appear to be the main source of this difference. In spite of these differences, we note that there are similarities in the models' simulations of the spatial patterns of various aerosol species (with each other and with observations), and hence the models could be valuable tools for aerosol-related studies over the Indian region. Better estimation of emission inventories could improve aerosol-related simulations. (C) 2015 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

When the media refer to political candidates during an electoral campaign, they project a certain image of them through the verbal expressions that predominate in news messages. This paper analyses these media verbalizations within the framework of Agenda Setting theory, more specifically its second level, which concerns the attributes or aspects that characterize the protagonists of the news; in this particular study, politicians. This theory has been tested repeatedly since its application to the 1968 United States elections by its authors, Maxwell McCombs and Donald Shaw, and has spread steadily from its country of origin to other latitudes. The general objective is to describe the image of the presidential candidates based on the expressions that predominated in the mass media during the presidential campaign held in Argentina in October 2011. The procedure consists of a survey carried out during the months preceding the presidential elections. To this end, a corpus was assembled from the selection of mass media to be analysed. A content analysis of the corpus was then performed, followed by analysis of the data, yielding a database in which the unit of analysis was the mention of the various aspects or characteristics of the political candidates. These aspects or characteristics were taken from previous studies that applied the same methodology and were also carried out in our country in electoral contexts. It is therefore feasible to make comparisons over time, since one of the great strengths of Agenda Setting theory is that it lends itself to comparison, being a systematic methodology of analysis. The review of Agenda Setting theory that frames this research first introduces mass-media research in general, then focuses on the theory itself, and in particular on its second level, which deals with the attributes of public figures. A brief overview of research in Latin America and Argentina on topics related to electoral and non-electoral campaigns follows, and finally, in a connected manner, an account is given of work carried out in our country in political campaign situations.