933 results for GRAPH SEARCH ALGORITHMS
Abstract:
For many, particularly in the Anglophone world and Western Europe, it may be obvious that Google has a monopoly over online search and advertising and that this is an undesirable state of affairs, given Google's ability to mediate information flows online. The baffling question may be why governments and regulators are doing little to nothing about this situation, given the increasingly pivotal importance of the internet and free-flowing communications in our lives. However, the law concerning monopolies, namely antitrust or competition law, works in a way the general public may find counterintuitive. Monopolies themselves are not illegal. The conduct that is unlawful, i.e. abuse of that market power, is defined by a complex set of rules and revolves principally around economic harm suffered due to anticompetitive behavior. However, the effect of information monopolies over search, such as Google's, is more than just economic, yet competition law does not address this. Furthermore, Google's collection and analysis of user data and its portfolio of related services make it difficult for others to compete. Such a situation may also explain why Google's established search rivals, Bing and Yahoo, have not managed to provide services that are as effective or popular as Google's own (on this issue see also the texts by Dirk Lewandowski and Astrid Mager in this reader). Users, however, are not entirely powerless. Google's business model rests, at least partially, on them – especially the data collected about them. If they stop using Google, then Google is nothing.
Abstract:
Because of limited sensor and communication ranges, designing efficient mechanisms for cooperative tasks is difficult. In this article, several negotiation schemes for multiple agents performing a cooperative task are presented. The negotiation schemes provide suboptimal solutions, but have the attractive features of fast decision-making and scalability to a large number of agents without increasing the complexity of the algorithm. A software agent architecture for the decision-making process is also presented. The effect of the magnitude of information flow during the negotiation process is studied by using different models of the negotiation scheme. The performance of the various negotiation schemes, using different information structures, is studied based on the uncertainty reduction achieved for a specified number of search steps. The negotiation schemes perform comparably to the optimal strategy in terms of uncertainty reduction while requiring very low computational time, roughly 7 per cent of that of the optimal strategy. Finally, an analysis of the computational and communication requirements of the negotiation schemes is carried out.
Abstract:
The estimation of the frequency of a sinusoidal signal is a well-researched problem. In this work we propose an initialization scheme for the popular dichotomous search of the periodogram peak algorithm (DSPA), which is used to estimate the frequency of a sinusoid in white Gaussian noise. Our initialization has low computational cost and gives the same performance as the DSPA while reducing the number of iterations needed for the fine search stage. We show that our algorithm remains stable as the number of iterations in the fine search stage is reduced. We also compare our modification to a previous modification of the DSPA and show that our initialization technique enhances the performance of the algorithm.
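For intuition, here is a minimal Python sketch of the coarse-search-plus-dichotomous-refinement idea behind the DSPA; the function names and the exact bisection variant are illustrative assumptions, not the authors' proposed initialization.

```python
import numpy as np

def periodogram_at(x, f):
    """Periodogram of x at normalized frequency f (cycles per sample)."""
    n = np.arange(len(x))
    return np.abs(np.sum(x * np.exp(-2j * np.pi * f * n))) ** 2 / len(x)

def dichotomous_peak_search(x, n_iter=20):
    """Coarse FFT peak followed by dichotomous refinement (illustrative)."""
    N = len(x)
    coarse = np.abs(np.fft.rfft(x)) ** 2
    fc = np.argmax(coarse) / N      # coarse estimate: FFT bin centre
    step = 1.0 / (2 * N)            # start with half a bin
    for _ in range(n_iter):
        # move towards the side of the peak with more power, then halve
        if periodogram_at(x, fc - step) > periodogram_at(x, fc + step):
            fc -= step
        else:
            fc += step
        step *= 0.5
    return fc

rng = np.random.default_rng(0)
n = np.arange(256)
x = np.cos(2 * np.pi * 0.1234 * n) + 0.1 * rng.standard_normal(n.size)
print(dichotomous_peak_search(x))   # close to 0.1234
```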
Abstract:
A geodesic-based approach using Lamb waves is proposed to locate the acoustic emission (AE) source and damage in an isotropic metallic structure. In the AE (passive) technique, the elastic waves take the shortest path from the source to the sensor array distributed in the structure. The geodesics are computed on the meshed surface of the structure using graph theory, specifically Dijkstra's algorithm. By virtually propagating the waves in reverse from the sensors along the geodesic paths and locating the first point where they intersect, one obtains the AE source location. The same approach is extended to the detection of damage in a structure. The wave response matrix of the given sensor configuration is obtained experimentally for both the healthy and the damaged structure. The two response matrices are compared, and their difference carries information about the waves reflected from the damage. These waves are backpropagated from the sensors, and the method above locates the damage by finding the point where the geodesics intersect. In this work, the geodesic approach is shown to yield a practicable source location solution in a more general set-up on any arbitrary surface containing finite discontinuities. Experiments were conducted on aluminum specimens of simple and complex geometry to validate the new method.
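For background, the geodesic computation mentioned here reduces to single-source shortest paths on the mesh graph. A minimal sketch using Dijkstra's algorithm, assuming the mesh has already been converted into a hypothetical adjacency-list layout with Euclidean edge lengths:

```python
import heapq

def dijkstra(adj, src):
    """Shortest geodesic distance from src to every mesh vertex.
    adj: {vertex: [(neighbor, edge_length), ...]} built from the surface mesh."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Localization idea from the abstract: the estimated AE source is the mesh
# vertex whose geodesic distances to the sensors best match the measured
# arrival-time differences (scaled by the wave speed).
```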
Abstract:
Rate-constrained power minimization (PMIN) over a code division multiple-access (CDMA) channel with correlated noise is studied. PMIN is shown to be an instance of a separable convex optimization problem subject to linear ascending constraints. PMIN is further reduced to a dual problem of sum-rate maximization (RMAX). The results highlight the underlying unity between PMIN, RMAX, and a problem closely related to PMIN but with linear receiver constraints. Subsequently, conceptually simple sequence design algorithms are proposed to explicitly identify an assignment of sequences and powers that solves PMIN. The algorithms yield an upper bound of 2N - 1 on the number of distinct sequences, where N is the processing gain. The sequences generated using the proposed algorithms are in general real-valued. If a rate-splitting and multi-dimensional CDMA approach is allowed, the upper bound reduces to N distinct sequences, in which case the sequences can form an orthogonal set and be binary ±1-valued.
Abstract:
We study the problem of matching applicants to jobs under one-sided preferences: each applicant ranks a non-empty subset of jobs in order of preference, possibly with ties. A matching M is said to be more popular than T if the applicants that prefer M to T outnumber those that prefer T to M. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching M is popular if phi(M, T) >= phi(T, M) for all matchings T, where phi(X, Y) is the number of applicants that prefer X to Y. Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function phi that compares two matchings generalizes naturally to mixed matchings by taking expectations. A mixed matching P is popular if phi(P, Q) >= phi(Q, P) for all mixed matchings Q. We show that popular mixed matchings always exist, and we design polynomial-time algorithms for finding them. We then study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.
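Using the definitions above, the comparison function phi can be coded directly. A small sketch with a hypothetical preference encoding (lower rank means more preferred; unranked jobs are unacceptable):

```python
def phi(X, Y, prefs):
    """Number of applicants who strictly prefer matching X to matching Y.
    prefs[a][j] is applicant a's rank of job j (lower = better).
    X, Y map each applicant to a job or None (unmatched)."""
    def rank(a, j):
        return prefs[a].get(j, float('inf')) if j is not None else float('inf')
    return sum(rank(a, X.get(a)) < rank(a, Y.get(a)) for a in prefs)

# M is more popular than T iff phi(M, T) > phi(T, M);
# M is popular iff phi(M, T) >= phi(T, M) for every matching T.
prefs = {'a1': {'j1': 1, 'j2': 2}, 'a2': {'j1': 1}}
M = {'a1': 'j2', 'a2': 'j1'}
T = {'a1': 'j1', 'a2': None}
print(phi(M, T, prefs), phi(T, M, prefs))  # 1 1 -> neither is more popular
```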
Abstract:
The idea of extracting knowledge in process mining is a descendant of data mining. Both mining disciplines emphasise data flow and relations among elements in the data. Unfortunately, challenges have been encountered when working with data flow and relations. One challenge is that the representation of the data flow between a pair of elements or tasks is oversimplified, as it considers only one-to-one data flow relations. In this paper, we discuss how the effectiveness of knowledge representation can be extended in both disciplines. To this end, we introduce a new representation of data flow and dependency formulation using a flow graph. The flow graph resolves the inability to represent other relation types, such as many-to-one and one-to-many relations. As an experiment, a new evaluation framework is applied to the Teleclaim process to show how this method provides more precise results when compared with other representations.
Abstract:
We address the novel problem of jointly evaluating multiple speech patterns for automatic speech recognition and training. We propose solutions based on both the non-parametric dynamic time warping (DTW) algorithm and the parametric hidden Markov model (HMM). We show that a hybrid approach is quite effective for noisy speech recognition. We extend the concept to HMM training in which some patterns may be noisy or distorted. Utilizing the concept of a "virtual pattern" developed for joint evaluation, we propose selective iterative training of HMMs. Evaluated on burst/transient noisy speech and isolated word recognition, the new algorithms obtain significant improvements in recognition accuracy over those that do not utilize the joint evaluation strategy.
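For reference, the non-parametric half of the approach builds on classic DTW. Below is a minimal sketch of standard single-pair DTW; the paper's joint multi-pattern extension is not reproduced:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping between two 1-D feature sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])          # local distance
            D[i, j] = cost + min(D[i - 1, j],        # insertion
                                 D[i, j - 1],        # deletion
                                 D[i - 1, j - 1])    # match
    return D[n, m]

print(dtw_distance([1, 2, 3, 4], [1, 2, 2, 3, 4]))  # 0.0 (perfect warp)
```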
Abstract:
The accretion disk around a compact object is a nonlinear general relativistic system involving magnetohydrodynamics. Naturally, the question arises whether such a system is chaotic (deterministic) or stochastic (random), which might be related to the associated transport properties, whose origin is still not confirmed. Earlier, the black hole system GRS 1915+105 was shown to exhibit low-dimensional chaos in certain temporal classes. However, such nonlinear phenomena have so far not been studied nearly as well for neutron stars, which are unique for their magnetosphere and kHz quasi-periodic oscillations (QPOs). On the other hand, it has been argued that the QPO is a result of nonlinear magnetohydrodynamic effects in accretion disks. If a neutron star exhibits a chaotic signature, then what is its chaotic/correlation dimension? We analyze RXTE/PCA data of the neutron stars Sco X-1 and Cyg X-2, along with the black hole Cyg X-1 and the unknown source Cyg X-3, and show that while Sco X-1 and Cyg X-2 are low-dimensional chaotic systems, Cyg X-1 and Cyg X-3 are stochastic sources. Based on our analysis, we argue that Cyg X-3 may be a black hole.
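The correlation dimension referred to here is conventionally estimated with the Grassberger-Procaccia procedure. A rough sketch under standard assumptions (time-delay embedding with hypothetical dim and tau; not necessarily the authors' exact pipeline):

```python
import numpy as np

def correlation_dimension(ts, dim=5, tau=1, radii=None):
    """Grassberger-Procaccia estimate of the correlation dimension
    of a scalar time series via time-delay embedding (illustrative)."""
    # Build delay vectors x_i = (ts[i], ts[i+tau], ..., ts[i+(dim-1)*tau])
    n = len(ts) - (dim - 1) * tau
    X = np.array([ts[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                 # pairwise distances
    if radii is None:
        radii = np.geomspace(np.percentile(d, 1), np.percentile(d, 50), 10)
    C = np.array([np.mean(d < r) for r in radii])  # correlation sum C(r)
    # Slope of log C(r) vs log r approximates the correlation dimension
    return np.polyfit(np.log(radii), np.log(C), 1)[0]
```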
Abstract:
Partitional clustering algorithms, which partition the dataset into a pre-defined number of clusters, can be broadly classified into two types: algorithms that explicitly take the number of clusters as input and algorithms that take the expected size of a cluster as input. In this paper, we propose a variant of the k-means algorithm and prove that it is more efficient than the standard k-means algorithm. An important contribution of this paper is the establishment of a relation between the number of clusters and the size of the clusters in a dataset through the analysis of our algorithm. We also demonstrate that integrating this algorithm as a pre-processing step into classification algorithms reduces their running-time complexity.
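For context, the baseline being improved upon is Lloyd-style k-means. A compact sketch of that standard baseline (the paper's variant is not reproduced here):

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's k-means: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1),
                           axis=1)
        # Recompute centroids; keep old center if a cluster empties out
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break  # converged
        centers = new
    return centers, labels
```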
Abstract:
Diffuse large B-cell lymphoma (DLBCL) is the most common of the non-Hodgkin lymphomas. As DLBCL is characterized by heterogeneous clinical and biological features, its prognosis varies. To date, the International Prognostic Index has been the strongest predictor of outcome for DLBCL patients. However, it takes no biological characteristics of the disease into account. Gene expression profiling studies have identified two major cell-of-origin phenotypes in DLBCL with different prognoses, the favourable germinal centre B-cell-like (GCB) and the unfavourable activated B-cell-like (ABC) phenotypes. However, results on the prognostic impact of the immunohistochemically defined GCB versus non-GCB distinction are controversial. Furthermore, since the addition of the CD20 antibody rituximab to chemotherapy has been established as the standard treatment of DLBCL, all molecular markers need to be re-evaluated in the post-rituximab era. In this study, we aimed to evaluate the predictive value of the immunohistochemically defined cell-of-origin classification in DLBCL patients. The GCB and non-GCB phenotypes were defined according to the Hans algorithm (CD10, BCL6 and MUM1/IRF4) among 90 immunochemotherapy- and 104 chemotherapy-treated DLBCL patients. In the chemotherapy group, we observed a significant difference in survival between GCB and non-GCB patients, with a good and a poor prognosis, respectively. However, in the rituximab group, no prognostic value of the GCB phenotype was observed. Likewise, among 29 high-risk de novo DLBCL patients receiving high-dose chemotherapy and autologous stem cell transplantation, the survival of non-GCB patients was improved, and no difference in outcome was seen between the GCB and non-GCB subgroups. Since these results suggested that the Hans algorithm is not applicable to immunochemotherapy-treated DLBCL patients, we next focused on algorithms based on ABC markers. We examined a modified activated B-cell-like algorithm based on MUM1/IRF4 and FOXP1, as well as the previously reported Muris algorithm (BCL2, CD10 and MUM1/IRF4), among 88 DLBCL patients uniformly treated with immunochemotherapy. Both algorithms distinguished the unfavourable ABC-like subgroup, with a significantly inferior failure-free survival relative to the GCB-like DLBCL patients. Similarly, the results for the individual predictive molecular markers, the transcription factor FOXP1 and the anti-apoptotic protein BCL2, have been inconsistent and should be assessed in immunochemotherapy-treated DLBCL patients. These markers were evaluated in a cohort of 117 patients treated with rituximab and chemotherapy. FOXP1 expression could not distinguish between patients with favourable outcomes and those with poor outcomes. In contrast, BCL2-negative DLBCL patients had significantly superior survival relative to BCL2-positive patients. Our results indicate that the immunohistochemically defined cell-of-origin classification in DLBCL has prognostic impact in the immunochemotherapy era when the identifying algorithms are based on ABC-associated markers. We also propose that BCL2 negativity is predictive of a favourable outcome. Further investigational efforts are, however, warranted to identify the molecular features of DLBCL that could enable individualized cancer therapy in routine patient care.
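The Hans algorithm itself is a short immunostain decision tree. A sketch of the decision rule as it is commonly described, with the usual 30% positivity cut-off; this is illustrative only, not clinical software:

```python
def hans_classifier(cd10, bcl6, mum1, cutoff=0.30):
    """Hans algorithm for DLBCL cell-of-origin, as commonly described:
    inputs are fractions of tumour cells staining positive for each marker.
    Returns 'GCB' or 'non-GCB'. Illustrative sketch only."""
    if cd10 >= cutoff:
        return 'GCB'                   # CD10+ -> GCB
    if bcl6 < cutoff:
        return 'non-GCB'               # CD10-, BCL6- -> non-GCB
    return 'GCB' if mum1 < cutoff else 'non-GCB'  # decided by MUM1/IRF4

print(hans_classifier(0.6, 0.1, 0.8))  # GCB (CD10 positive)
print(hans_classifier(0.0, 0.5, 0.8))  # non-GCB (MUM1 positive)
```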
Abstract:
This article analyzes how a new failure envelope, devised by combining the most commonly used failure criteria for composite laminates, affects the design of composite structures. The failure criteria considered in the study are the maximum stress and Tsai-Wu criteria. In addition to these popular phenomenological failure criteria, a micromechanics-based criterion called the failure mechanism-based failure criterion is also considered. The failure envelopes obtained from these criteria are superimposed over one another, and a new failure envelope is constructed from the lowest absolute values of the strengths they predict. The envelope so obtained is accordingly named the most conservative failure envelope. A minimum-weight design of composite laminates is performed using genetic algorithms. In addition, the effect of stacking sequence on the minimum weight of the laminate is studied. Results are compared across the different failure envelopes, and the conservative design is evaluated with respect to the designs obtained using any single failure criterion. The design approach is recommended for structures where composites are the key load-carrying members, such as helicopter rotor blades.
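The most conservative envelope amounts to a pointwise minimum of the strength magnitudes predicted by the individual criteria. A small sketch with hypothetical sampled values:

```python
import numpy as np

def conservative_envelope(envelopes):
    """Pointwise most conservative envelope: for each sampled load
    direction, keep the smallest predicted strength magnitude among the
    criteria. envelopes: (n_criteria, n_points) array of strengths, all
    sampled at the same load directions (hypothetical layout)."""
    return np.min(np.abs(np.asarray(envelopes)), axis=0)

# e.g. strengths predicted by max-stress, Tsai-Wu and the
# failure-mechanism-based criterion at 4 sampled load directions
max_stress = [410.0, 380.0, 120.0, 95.0]
tsai_wu    = [395.0, 400.0, 110.0, 99.0]
fmb        = [420.0, 370.0, 125.0, 90.0]
print(conservative_envelope([max_stress, tsai_wu, fmb]))
# -> [395. 370. 110.  90.]
```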
Abstract:
An acyclic edge coloring of a graph is a proper edge coloring such that there are no bichromatic cycles. The acyclic chromatic index of a graph G is the minimum number k such that there is an acyclic edge coloring using k colors, and it is denoted by a′(G). From a result of Burstein it follows that all subcubic graphs are acyclically edge colorable using five colors. This result is tight, since there are 3-regular graphs which require five colors. In this paper we prove that any non-regular connected graph of maximum degree 3 is acyclically edge colorable using at most four colors. This result is tight, since all edge-maximal non-regular connected graphs of maximum degree 3 require four colors.
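The defining property is straightforward to verify programmatically: a coloring is acyclic if and only if it is proper and the union of every two color classes is a forest. A sketch of such a checker (helper names are illustrative):

```python
from itertools import combinations

def is_acyclic_edge_coloring(edges, color):
    """Check a coloring for properness and bichromatic cycles.
    edges: list of (u, v) tuples; color: dict mapping each edge to a color."""
    # proper: edges sharing a vertex must get distinct colors
    incident = {}
    for e in edges:
        for v in e:
            if color[e] in incident.setdefault(v, set()):
                return False
            incident[v].add(color[e])
    # acyclic: every 2-colored subgraph is a forest (union-find cycle test)
    for c1, c2 in combinations(set(color.values()), 2):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]  # path compression
                x = parent[x]
            return x
        for (u, v) in edges:
            if color[(u, v)] in (c1, c2):
                ru, rv = find(u), find(v)
                if ru == rv:
                    return False  # bichromatic cycle found
                parent[ru] = rv
    return True

# triangle with 3 distinct colors: proper and trivially acyclic
E = [(0, 1), (1, 2), (0, 2)]
print(is_acyclic_edge_coloring(E, {E[0]: 1, E[1]: 2, E[2]: 3}))  # True
```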
Abstract:
The intention of this note is to motivate researchers to study Hadwiger's conjecture for circular arc graphs. Let η(G) denote the size of the largest clique minor of a graph G, and let χ(G) denote its chromatic number. Hadwiger's conjecture states that η(G) ≥ χ(G) and is one of the most important and difficult open problems in graph theory. From the point of view of researchers who are sceptical of the validity of the conjecture, it is interesting to study it for graph classes where η(G) is guaranteed not to grow too fast with respect to χ(G), since such classes of graphs are a reasonable place to look for possible counterexamples. We show that in any circular arc graph G, η(G) ≤ 2χ(G) − 1, and that there is a family of circular arc graphs attaining this bound. So, it makes sense to study Hadwiger's conjecture for this family.