48 results for Graphs
Abstract:
A simplex-lattice statistical design was employed to develop an optimization method for the preservative system of an ophthalmic suspension of dexamethasone and polymyxin B. The design matrix generated 17 formulations, differentiated by the preservatives and EDTA (disodium ethylenediaminetetraacetate), the independent variables being: X1 = chlorhexidine digluconate (0.010% w/v); X2 = phenylethanol (0.500% w/v); X3 = EDTA (0.100% w/v). The dependent variable was the D-value obtained from the microbial challenge of the formulations, calculated by modeling the microbial killing process as an exponential function. The analysis of the dependent variable, performed using the Design Expert/W software, yielded cubic equations, with terms selected by the stepwise adjustment method, for the challenge microorganisms: Pseudomonas aeruginosa, Burkholderia cepacia, Staphylococcus aureus, Candida albicans and Aspergillus niger. Besides the mathematical expressions, response surfaces and contour graphs were obtained for each assay. The contour graphs were overlaid to identify a region containing the most adequate formulations (graphic strategy), represented by: X1 = 0.10 (0.001% w/v); X2 = 0.80 (0.400% w/v); X3 = 0.10 (0.010% w/v). Additionally, to minimize the responses (D-values), a numerical strategy based on the desirability function was used, which resulted in the following combination of independent variables: X1 = 0.25 (0.0025% w/v); X2 = 0.75 (0.375% w/v); X3 = 0. The formulations derived from the two strategies (graphic and numerical) were submitted to microbial challenge, and the experimental D-value was compared with the theoretical D-value calculated from the cubic equations. The two D-values were similar in all assays except the one involving Staphylococcus aureus. This microorganism, like Pseudomonas aeruginosa, showed high susceptibility to the formulations independently of the preservative and EDTA concentrations. The formulations derived from both strategies met the criteria recommended by the official method. It was concluded that the proposed model allowed optimization of the formulations with respect to preservation.
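For context, the D-value in such challenge tests is the time required for a one-log10 reduction in viable counts under the exponential-killing model N(t) = N0 * 10^(-t/D). A minimal sketch of how it can be estimated from survivor counts (hypothetical data; the helper name d_value is ours, not the paper's):

```python
import numpy as np

def d_value(times_h, counts_cfu):
    """Estimate the D-value (time for a 1-log10 reduction) by fitting
    log10 survivor counts against time, assuming exponential killing:
    N(t) = N0 * 10**(-t / D)."""
    log_counts = np.log10(counts_cfu)
    slope, _ = np.polyfit(times_h, log_counts, 1)  # linear fit in log space
    return -1.0 / slope

# Hypothetical challenge-test data: survivor counts over 24 h
times = np.array([0, 6, 24])          # hours
counts = np.array([1e6, 1e4, 2e1])    # CFU/mL
print(f"D-value ~ {d_value(times, counts):.2f} h")
```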
Abstract:
A graph clustering algorithm constructs groups of closely related parts and machines separately. After these groups are matched for the fewest intercell moves, a refining process runs on the initial cell formation to decrease the number of intercell moves further. A simple modification of this main approach can handle practical constraints, such as the popular constraint bounding the maximum number of machines in a cell. Our approach substantially reduces computational time; more importantly, the number of intercell moves improves when our computational results are compared with the best known solutions from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
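The objective being refined here, the intercell move count, can be illustrated with a small sketch (this is the standard cell-formation objective on a part-machine incidence matrix, not the paper's specific matching or refinement procedure):

```python
import numpy as np

def intercell_moves(incidence, machine_cell):
    """Count intercell moves for a parts x machines incidence matrix
    (1 = part visits machine). Each part is assigned to the cell holding
    most of its machines; every operation outside that cell counts as one
    intercell move."""
    moves = 0
    n_cells = machine_cell.max() + 1
    for part in incidence:
        per_cell = np.bincount(machine_cell[part == 1], minlength=n_cells)
        moves += per_cell.sum() - per_cell.max()  # operations outside home cell
    return moves

# Hypothetical 4-part x 5-machine example with machines split into 2 cells
A = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1]])
cells = np.array([0, 0, 1, 1, 0])  # machine -> cell assignment
print(intercell_moves(A, cells))   # -> 1
```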
Abstract:
Bioelectrical impedance vector analysis (BIVA) is a new method used for routine monitoring of variations in body fluids and nutritional status without assumptions regarding body composition. The aim of the present study was to determine bivariate tolerance intervals of the whole-body impedance vector and to describe phase angle (PA) values for healthy term newborns aged 7-28 d. This descriptive cross-sectional study was conducted on healthy term neonates born at a low-risk public maternity hospital. General and anthropometric neonatal data and bioelectrical impedance data (800 µA, 50 kHz) were obtained. Bivariate vector analysis was conducted with the resistance-reactance (RXc) graph method, and the BIVA software was used to construct the graphs. The study included 109 neonates (52.3% females) who were born at term, adequate for gestational age, exclusively breast-fed and aged 13 (SD 3.6) d. We constructed one standard reference RXc-score graph and RXc tolerance ellipses (50, 75 and 95%) that can be used with any analyser. Mean PA was 3.14 (SD 0.43)° (3.12 (SD 0.39)° for males and 3.17 (SD 0.48)° for females). Given the overlap of the male and female ellipses with the general distribution, a single reference tolerance ellipse was defined for boys and girls aged 7-28 d. The results differ from those reported in the literature, probably due in part to ethnic differences in body composition. BIVA and PA permit an assessment without the need to know body weight and without the prediction error of conventional impedance formulas.
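For reference, the phase angle reported above follows from the standard bioimpedance relation PA = arctan(Xc/R) * 180/pi; a minimal sketch with a hypothetical neonatal reading:

```python
import math

def phase_angle(resistance_ohm, reactance_ohm):
    """Phase angle (degrees) from whole-body resistance R and reactance Xc,
    using the standard relation PA = arctan(Xc / R) * 180 / pi."""
    return math.degrees(math.atan(reactance_ohm / resistance_ohm))

# Hypothetical neonatal reading: R = 650 ohm, Xc = 36 ohm
print(f"PA = {phase_angle(650, 36):.2f} deg")  # ~3.17 deg
```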
Abstract:
Background: Understanding how clinical variables affect stress distribution facilitates optimal prosthesis design and fabrication, and may lead to fewer mechanical failures and improved implant longevity. Purpose: In this study, the many clinical variations present in implant-supported prostheses were analyzed by the 3-D finite element method. Materials and Methods: A geometric model representing the anterior segment of a human mandible treated with 5 implants supporting a framework was created to perform the tests. The variables introduced in the computer model were cantilever length, elastic modulus of cancellous bone, abutment length, implant length, and framework alloy (AgPd or CoCr). The model was assigned physical properties of the materials derived from the literature, and a 100 N vertical load was used to simulate the occlusal force. Images with the stress fringes were obtained, and the maximum stress at each site was plotted in graphs for comparison. Results: Stresses clustered at the elements closest to the loading point. The stress increase was proportional to the increase in cantilever length and inversely proportional to the increase in the elastic modulus of cancellous bone. Increasing the abutment length resulted in a decrease of stress on implants and framework. A stress decrease could not be demonstrated with implants longer than 13 mm. A stiffer framework may allow better stress distribution. Conclusion: The relative physical properties of the many materials involved in an implant-supported prosthesis system affect the way stresses are distributed.
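The reported proportionality between stress and cantilever length is consistent with first-order beam theory, sigma = M * c / I with bending moment M = F * L; a toy check of the trend (a beam analogy only, not the 3-D finite element model used in the study):

```python
def cantilever_bending_stress(force_n, length_mm, c_mm, inertia_mm4):
    """Maximum bending stress (MPa) at the support of an idealized
    cantilever: sigma = M * c / I with M = F * L. A beam analogy for
    the reported trend only, not the study's FEM model."""
    return force_n * length_mm * c_mm / inertia_mm4

# Doubling the cantilever length doubles the stress under this analogy
print(cantilever_bending_stress(100, 10, 2, 50))  # 40.0 MPa
print(cantilever_bending_stress(100, 20, 2, 50))  # 80.0 MPa
```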
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. As predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. For this reason, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing them to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope this paper may shed some light on deciding which methods are more suitable to use in different situations.
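ROC graphs are a typical example of the graphical methods such a survey covers; a self-contained sketch of how the (FPR, TPR) points are obtained by sweeping the decision threshold over classifier scores (toy scores and labels):

```python
import numpy as np

def roc_points(scores, labels):
    """Compute ROC points (FPR, TPR) by sweeping the decision threshold
    over the classifier scores, from most to least confident."""
    order = np.argsort(-scores)   # descending score
    labels = labels[order]
    tps = np.cumsum(labels)       # true positives at each cut-off
    fps = np.cumsum(1 - labels)   # false positives at each cut-off
    return fps / fps[-1], tps / tps[-1]

scores = np.array([0.9, 0.8, 0.6, 0.55, 0.3])
labels = np.array([1, 1, 0, 1, 0])
fpr, tpr = roc_points(scores, labels)
print(list(zip(fpr, tpr)))
```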
Abstract:
We generalize results in Cruz and de Rezende (1999) [7] by completely describing how the Betti numbers of the boundary of an orientable manifold vary after attaching a handle, when the homology coefficients are in Z, Q, R or Z/pZ with p prime. First we apply this result to the Conley index theory of Lyapunov graphs. Next we consider the Ogasa invariant associated with handle decompositions of manifolds. We make use of the above results to obtain upper bounds for the Ogasa invariant of product manifolds. (C) 2011 Elsevier B.V. All rights reserved.
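For context, the Betti numbers in question are the standard dimensions of homology with the chosen coefficients; over Z/pZ they may differ from the rational ones because of p-torsion, which is why the choice of coefficient ring matters:

```latex
\[
  \beta_k(X;\mathbb{F}) \;=\; \dim_{\mathbb{F}} H_k(X;\mathbb{F}),
  \qquad \mathbb{F} \in \{\mathbb{Q},\ \mathbb{R},\ \mathbb{Z}/p\mathbb{Z}\},
  \qquad \beta_k(X;\mathbb{Z}) \;=\; \operatorname{rank} H_k(X;\mathbb{Z}).
\]
```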
Abstract:
Automatic text summarization is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing a piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created that selects the sentences ranked highest by the 14 systems, as in a voting scheme. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts therefore appears suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features. (c) 2008 Elsevier Inc. All rights reserved.
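A toy version of the degree-based variant makes the construction concrete (noun extraction is assumed to be done upstream and is given by hand here; this is our illustration, not the authors' code):

```python
import itertools
import networkx as nx

def degree_summary(sentences, nouns_per_sentence, n_select=2):
    """Degree-based extractive summarizer in the CN-Summ spirit:
    sentences are nodes, edges link sentences sharing a meaningful noun,
    and the highest-degree sentences are selected for the summary."""
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        if nouns_per_sentence[i] & nouns_per_sentence[j]:
            g.add_edge(i, j)
    top = sorted(g.nodes, key=g.degree, reverse=True)[:n_select]
    return [sentences[k] for k in sorted(top)]  # keep original order

sents = ["Graphs model texts.", "Nodes are sentences.",
         "Edges share nouns.", "Cats sleep."]
nouns = [{"graph", "text"}, {"node", "sentence"},
         {"edge", "noun", "sentence"}, {"cat"}]
print(degree_summary(sents, nouns, n_select=2))
```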
Abstract:
Complex networks can be understood as graphs whose connectivity properties deviate from those of regular or near-regular graphs, which are understood as being "simple". While a great deal of the attention so far dedicated to complex networks has been duly driven by the "complex" nature of these structures, in this work we address the identification of their simplicity. The basic idea is to seek subgraphs whose nodes exhibit similar measurements. This approach paves the way for complementing the characterization of networks, including results suggesting that protein-protein interaction networks, and to a lesser extent also the Internet, may be getting simpler over time. Copyright (C) EPLA, 2009
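A crude sketch of the "similar measurements" idea, bucketing nodes by local measurements (the choice of measurements and the rounding are our illustrative assumptions, not the paper's procedure):

```python
from collections import defaultdict
import networkx as nx

def similar_node_groups(g, digits=1):
    """Bucket nodes by (degree, rounded clustering coefficient): nodes in
    the same bucket have near-identical local measurements, a crude
    stand-in for the search for 'simple' subgraphs of similar nodes."""
    buckets = defaultdict(list)
    clustering = nx.clustering(g)
    for n in g:
        buckets[(g.degree(n), round(clustering[n], digits))].append(n)
    return [nodes for nodes in buckets.values() if len(nodes) > 1]

print(similar_node_groups(nx.karate_club_graph()))
```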
Abstract:
Cortical bones, essential for mechanical support and structure in many animals, involve a large number of canals organized in an intricate fashion. Using state-of-the-art image analysis and computer graphics, a 3D reconstruction of a whole bone (phalange) of a young chicken was obtained and represented as a complex network in which each canal was associated with an edge and every confluence of three or more canals yielded a node. Representing the bone canal structure as a complex network allowed several methods to be applied to characterize and analyze the organization of the canal system and its robustness. First, the distribution of node degrees (i.e. the number of canals connected to each node) confirmed previous indications that bone canal networks follow a power law, and therefore contain some highly connected nodes (hubs). The bone network was also found to be partitioned into communities or modules, i.e. groups of nodes that are more intensely connected to one another than to the rest of the network. We verified that each community exhibited distinct topological properties that are possibly linked to specific functions. To better understand the organization of the bone network, its resilience to two types of failures (random attack and cascading failures) was also quantified in comparison with randomized and regular counterparts. The results indicate that the modular structure improves the robustness of the bone network when compared to a regular network with the same average degree and number of nodes. The effects of disease processes (e.g., osteoporosis) and gene mutations (e.g., BMP4) that occur at the molecular level can now be investigated at the mesoscopic level by using network-based approaches.
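The random-attack part of such a resilience test can be sketched as follows (cascading failures additionally require a load-redistribution model, which is omitted here):

```python
import random
import networkx as nx

def giant_component_under_attack(g, fraction=0.2, seed=0):
    """Remove a random fraction of nodes and report the relative size of
    the largest connected component: the simplest failure test, used to
    compare a network against randomized or regular counterparts."""
    rng = random.Random(seed)
    h = g.copy()
    k = int(fraction * h.number_of_nodes())
    h.remove_nodes_from(rng.sample(list(h.nodes), k))
    if h.number_of_nodes() == 0:
        return 0.0
    return max(len(c) for c in nx.connected_components(h)) / g.number_of_nodes()

# e.g., compare a hub-rich graph with a degree-matched random one
print(giant_component_under_attack(nx.barabasi_albert_graph(200, 2)))
```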
Abstract:
Complex networks exist in many areas of science, such as biology, neuroscience, engineering, and sociology. The growing development of this area has led to the introduction of several topological and dynamical measurements, which describe and quantify the structure of networks. Such characterization is essential not only for the modeling of real systems but also for the study of dynamic processes that may take place in them. However, it is not easy to use several measurements together in the analysis of complex networks, owing to the correlations among them and the difficulty of visualizing them. To overcome these limitations, we propose an effective and comprehensive approach for the analysis of complex networks that allows the visualization of several measurements in a few projections containing the largest data variance, and the classification of networks at three levels of detail: vertices, communities, and the global topology. We also demonstrate the efficiency and universality of the proposed methods on a series of real-world networks at all three levels.
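A minimal sketch of the projection step, assuming scikit-learn is available and using three illustrative per-network measurements (the paper's measurement set is richer):

```python
import networkx as nx
import numpy as np
from sklearn.decomposition import PCA

def project_measurements(graphs):
    """Project a handful of per-network measurements onto the two
    principal components with the largest variance, so correlated
    measurements can be visualized in a single plot."""
    feats = np.array([
        [nx.density(g),
         nx.average_clustering(g),
         nx.degree_assortativity_coefficient(g)]
        for g in graphs
    ])
    feats = np.nan_to_num(feats)  # assortativity is undefined on some graphs
    return PCA(n_components=2).fit_transform(feats)

graphs = [nx.erdos_renyi_graph(60, 0.1, seed=s) for s in range(8)]
print(project_measurements(graphs).shape)  # (8, 2)
```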
Abstract:
Based on a divide-and-conquer approach, knowledge about nature has been organized into a set of interrelated facts, allowing a natural representation in terms of graphs: each 'chunk' of knowledge corresponds to a node, while relationships between such chunks are expressed as edges. This organization becomes particularly clear in the case of mathematical theorems, with their intense cross-implications and relationships. We have derived a web of mathematical theorems from Wikipedia and, thanks to the powerful concept of entropy, identified its more central and frontier elements. Our results also suggest that the central nodes are the oldest theorems, while the frontier nodes are those recently added to the network. The network communities have also been identified, allowing further insights into the organization of this network, such as its highly modular structure.
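One plausible reading of an entropy-based centrality is the Shannon entropy of where short random walks from a node end up: high entropy suggests a central node, low entropy a frontier one. The paper's exact measure may differ, so the sketch below is an assumption-laden illustration (graph assumed connected):

```python
import math
import networkx as nx

def walk_entropy(g, node, steps=3):
    """Shannon entropy (bits) of the position distribution of a
    `steps`-step random walk started at `node`."""
    probs = {node: 1.0}
    for _ in range(steps):
        nxt = {}
        for u, p in probs.items():
            nbrs = list(g.neighbors(u))
            for v in nbrs:
                nxt[v] = nxt.get(v, 0.0) + p / len(nbrs)
        probs = nxt
    return -sum(p * math.log2(p) for p in probs.values())

g = nx.karate_club_graph()
print(sorted(g.nodes, key=lambda n: walk_entropy(g, n), reverse=True)[:3])
```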
Abstract:
There has been great interest in deciding whether a combinatorial structure satisfies some property, or in estimating the value of some numerical function associated with this combinatorial structure, by considering only a randomly chosen substructure of sufficiently large, but constant, size. These problems are called property testing and parameter testing, where a property or parameter is said to be testable if it can be estimated accurately in this way. The algorithmic appeal is evident, as, conditional on sampling, this leads to reliable constant-time randomized estimators. Our paper addresses property testing and parameter testing for permutations from a subpermutation perspective; more precisely, we investigate permutation properties and parameters that can be well approximated based on a randomly chosen subpermutation of much smaller size. In this context, we use a theory of convergence of permutation sequences developed by the present authors [C. Hoppen, Y. Kohayakawa, C.G. Moreira, R.M. Sampaio, Limits of permutation sequences through permutation regularity, Manuscript, 2010, 34pp.] to characterize testable permutation parameters, along the lines of the work of Borgs et al. [C. Borgs, J. Chayes, L. Lovasz, V.T. Sos, B. Szegedy, K. Vesztergombi, Graph limits and parameter testing, in: STOC'06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 261-270] in the case of graphs. Moreover, we obtain a permutation result in the direction of a famous result of Alon and Shapira [N. Alon, A. Shapira, A characterization of the (natural) graph properties testable with one-sided error, SIAM J. Comput. 37 (6) (2008) 1703-1727] stating that every hereditary graph property is testable. (C) 2011 Elsevier B.V. All rights reserved.
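Parameter testing in this sense can be illustrated with one concrete testable parameter, the inversion density, estimated from random subpermutations (our toy example; the paper characterizes testable parameters in general):

```python
import random

def inversion_density(perm):
    """Fraction of pairs (i, j), i < j, with perm[i] > perm[j]."""
    n = len(perm)
    inv = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
    return inv / (n * (n - 1) / 2)

def sampled_estimate(perm, k=20, trials=200, seed=0):
    """Estimate the inversion density of a large permutation from random
    k-point subpermutations: sample k positions, keep their order, and
    average the densities of the induced subpermutations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        idx = sorted(rng.sample(range(len(perm)), k))
        total += inversion_density([perm[i] for i in idx])
    return total / trials

perm = list(range(1000))
random.Random(1).shuffle(perm)
print(round(sampled_estimate(perm), 3))  # ~0.5 for a uniform shuffle
```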
Abstract:
We consider conditions which allow the embedding of linear hypergraphs of fixed size. In particular, we prove that any k-uniform hypergraph H of positive uniform density contains all linear k-uniform hypergraphs of a given size. More precisely, we show that for all integers l >= k >= 2 and every d > 0 there exists ϱ > 0 for which the following holds: if H is a sufficiently large k-uniform hypergraph on n vertices with the property that the density of H induced on every vertex subset of size ϱn is at least d, then H contains every linear k-uniform hypergraph F with l vertices. The main ingredient in the proof of this result is a counting lemma for linear hypergraphs, which establishes that the straightforward extension of graph epsilon-regularity to hypergraphs suffices for counting linear hypergraphs. We also consider some related problems. (C) 2009 Elsevier Inc. All rights reserved.
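Restated compactly, with n = |V(H)| and d(H[S]) the density of the subhypergraph induced on S, the embedding result reads:

```latex
\[
  \forall\, \ell \ge k \ge 2,\ \forall\, d > 0\ \ \exists\, \varrho > 0:\quad
  d\bigl(H[S]\bigr) \ge d \ \text{ for all } S \subseteq V(H),\ |S| = \varrho n,
\]
\[
  \Longrightarrow\ F \subseteq H \ \text{ for every linear } k\text{-uniform } F
  \text{ on } \ell \text{ vertices, provided } n \text{ is sufficiently large.}
\]
```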
Abstract:
For a fixed family F of graphs, an F-packing in a graph G is a set of pairwise vertex-disjoint subgraphs of G, each isomorphic to an element of F. Finding an F-packing that maximizes the number of covered edges is a natural generalization of the maximum matching problem, which corresponds to F = {K(2)}. In this paper we provide new approximation algorithms and hardness results for the K(r)-packing problem, where K(r) = {K(2), K(3), . . . , K(r)}. We show that already for r = 3 the K(r)-packing problem is APX-complete, and, in fact, we show that it remains so even for graphs with maximum degree 4. On the positive side, we give an approximation algorithm with approximation ratio at most 2 for every fixed r. For r = 3, 4, 5 we obtain better approximations. For r = 3 we obtain a simple 3/2-approximation, achieving a known ratio that follows from a more involved algorithm of Halldorsson. For r = 4, we obtain a (3/2 + epsilon)-approximation, and for r = 5 we obtain a (25/14 + epsilon)-approximation. (C) 2008 Elsevier B.V. All rights reserved.
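A natural greedy heuristic in this family, triangles first and then a maximum matching on what remains, conveys the flavour of K(r)-packing for r = 3 (a sketch only; the paper's analyzed 3/2-approximation uses a more careful selection rule):

```python
import networkx as nx

def greedy_triangle_matching_packing(g):
    """Greedy {K2, K3}-packing heuristic: repeatedly remove a triangle
    while one exists, then add a maximum matching on the leftover
    vertices. Covers edges with vertex-disjoint K3s and K2s."""
    h = g.copy()
    packing = []
    found = True
    while found:
        found = False
        for u, v in list(h.edges):
            common = set(h[u]) & set(h[v])  # common neighbors close a K3
            if common:
                w = common.pop()
                packing.append(('K3', (u, v, w)))
                h.remove_nodes_from([u, v, w])
                found = True
                break
    for u, v in nx.max_weight_matching(h, maxcardinality=True):
        packing.append(('K2', (u, v)))
    return packing

print(greedy_triangle_matching_packing(nx.complete_graph(5)))
```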
Abstract:
Using the Luthar-Passi method, we investigate the classical Zassenhaus conjecture for the normalized unit group of the integral group ring of the Suzuki sporadic simple group Suz. As a consequence, for this group we confirm Kimmerle's conjecture on prime graphs.
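For reference, the prime graph (Gruenberg-Kegel graph) Γ(G) of a finite group G has the primes dividing |G| as vertices, with p and q adjacent whenever G contains an element of order pq; Kimmerle's conjecture, as usually stated, asserts equality of prime graphs for the group and the normalized units of its integral group ring:

```latex
\[
  \Gamma\bigl(V(\mathbb{Z}G)\bigr) \;=\; \Gamma(G),
\]
% where V(\mathbb{Z}G) denotes the group of normalized units of the
% integral group ring \mathbb{Z}G.
```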