26 results for Convexity in Graphs
Abstract:
A simplex-lattice statistical design was employed to study an optimization method for the preservative system of an ophthalmic suspension of dexamethasone and polymyxin B. The assay matrix generated 17 formulas differentiated by their preservatives and EDTA (disodium ethylenediaminetetraacetate), the independent variables being: X1 = chlorhexidine digluconate (0.010 % w/v); X2 = phenylethanol (0.500 % w/v); X3 = EDTA (0.100 % w/v). The dependent variable was the D-value obtained from the microbial challenge of the formulas, calculated by modeling the microbial killing process with an exponential function. The analysis of the dependent variable, performed using the software Design Expert/W, yielded cubic equations whose terms were selected by a stepwise fitting method for the challenge microorganisms: Pseudomonas aeruginosa, Burkholderia cepacia, Staphylococcus aureus, Candida albicans and Aspergillus niger. Besides the mathematical expressions, response surfaces and contour plots were obtained for each assay. The contour plots were overlaid to identify a region containing the most adequate formulas (graphic strategy), represented by: X1 = 0.10 (0.001 % w/v); X2 = 0.80 (0.400 % w/v); X3 = 0.10 (0.010 % w/v). Additionally, to minimize the responses (D-values), a numerical strategy based on the desirability function was used, which resulted in the following combination of independent variables: X1 = 0.25 (0.0025 % w/v); X2 = 0.75 (0.375 % w/v); X3 = 0. The formulas derived from the two strategies (graphic and numerical) were subjected to microbial challenge, and the experimental D-value was compared to the theoretical D-value calculated from the cubic equation. The two D-values were similar for all assays except the one involving Staphylococcus aureus. This microorganism, like Pseudomonas aeruginosa, was highly susceptible to the formulas independently of the preservative and EDTA concentrations. The formulas derived from both the graphic and numerical strategies met the criteria recommended by the official method. It was concluded that the proposed model allowed the optimization of the formulas with respect to their preservation.
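For context, the D-value in an exponential kill model is the time needed for a one-log (90 %) reduction in viable count; a minimal sketch of how it can be estimated from challenge-test counts (the data below are illustrative placeholders, not from the study):

```python
import numpy as np

# Illustrative challenge-test data (hours, CFU/mL) -- placeholder values,
# not taken from the study.
t = np.array([0.0, 6.0, 24.0, 48.0])     # sampling times
counts = np.array([1e6, 2e5, 1e3, 1e1])  # surviving counts

# Exponential kill model: log10 N(t) = log10 N0 - t / D,
# so a linear fit of log10 N against t gives slope = -1/D.
slope, intercept = np.polyfit(t, np.log10(counts), 1)
D_value = -1.0 / slope
print(f"Estimated D-value: {D_value:.2f} h")
```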
Abstract:
Bioelectrical impedance vector analysis (BIVA) is a new method used for routine monitoring of variations in body fluids and nutritional status without assumptions regarding body composition values. The aim of the present study was to determine bivariate tolerance intervals of the whole-body impedance vector and to describe phase angle (PA) values for healthy term newborns aged 7-28 d. This descriptive cross-sectional study was conducted on healthy term neonates born at a low-risk public maternity hospital. General and anthropometric neonatal data and bioelectrical impedance data (800 µA, 50 kHz) were obtained. Bivariate vector analysis was conducted with the resistance-reactance (RXc) graph method, and the BIVA software was used to construct the graphs. The study included 109 neonates (52.3 % females) who were born at term, adequate for gestational age, exclusively breast-fed and aged 13 (SD 3.6) d. We constructed one standard reference RXc-score graph and RXc tolerance ellipses (50, 75 and 95 %) that can be used with any analyser. Mean PA was 3.14 (SD 0.43)° (3.12 (SD 0.39)° for males and 3.17 (SD 0.48)° for females). Given the overlap of the male and female ellipses with the general distribution, a graph for newborns aged 7-28 d with the same reference tolerance ellipses was defined for boys and girls. The results differ from those reported in the literature, probably due in part to ethnic differences in body composition. BIVA and PA permit an assessment without the need to know body weight and without the prediction error of conventional impedance formulas.
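The phase angle follows directly from the measured resistance (R) and reactance (Xc), and BIVA plots the height-normalised vector on the RXc graph; a minimal sketch of these standard computations (all values below are illustrative):

```python
import math

# Illustrative single-frequency (50 kHz) measurements -- placeholder values.
R_ohm = 620.0    # resistance
Xc_ohm = 34.0    # reactance
height_m = 0.52  # recumbent length of the neonate

# Phase angle in degrees: PA = arctan(Xc / R)
pa_deg = math.degrees(math.atan(Xc_ohm / R_ohm))

# BIVA plots the height-normalised vector (R/H, Xc/H) on the RXc graph.
r_norm = R_ohm / height_m    # ohm/m
xc_norm = Xc_ohm / height_m  # ohm/m
print(f"PA = {pa_deg:.2f} deg, R/H = {r_norm:.0f} ohm/m, Xc/H = {xc_norm:.0f} ohm/m")
```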
Abstract:
Introduction: The aim of this study was to evaluate the dentoskeletal and soft-tissue effects of Class II malocclusion treatment with the Jasper jumper followed by Class II elastics at the different stages of therapy. Methods: The sample comprised 24 patients of both sexes (11 boys, 13 girls) with an initial age of 12.58 years, treated for a mean period of 2.15 years. Four lateral cephalograms were obtained of each patient in these stages of orthodontic treatment: at pretreatment (T1), after leveling and alignment (T2), after the use of the Jasper jumper appliance and before the use of Class II intermaxillary elastics (T3), and at posttreatment (T4). Thus, 3 treatment phases could be evaluated: leveling and alignment (T1-T2), use of the Jasper jumper (T2-T3), and use of Class II elastics (T3-T4). Dependent analysis of variance (ANOVA) and Tukey tests were used to compare the durations of the 3 treatment phases and for intragroup comparisons of the 4 treatment stages. Results: The alignment phase showed correction of the anteroposterior relationship, protrusion and labial inclination of the maxillary incisors, and reduction of overbite. The Jasper jumper phase demonstrated labial inclination, protrusion and intrusion of the mandibular incisors, mesialization and extrusion of the mandibular molars, reduction of overjet and overbite, molar relationship improvement, and reduction in facial convexity. The Class II elastics phase showed labial inclination of the maxillary incisors; retrusion, uprighting, and extrusion of the mandibular incisors; and overjet and overbite increases. Conclusions: The greatest amount of the Class II malocclusion anteroposterior discrepancy was corrected with the Jasper jumper appliance. Part of the correction was lost during Class II intermaxillary elastics use after use of the Jasper jumper appliance. (Am J Orthod Dentofacial Orthop 2011;140:e77-e84)
Abstract:
Texture is one of the most important visual attributes for image analysis and has been widely used in image analysis and pattern recognition. A partially self-avoiding deterministic walk has recently been proposed as an approach to texture analysis, with promising results. This approach uses walkers (called tourists) to exploit the gray-scale contexts of the image at several levels. Here, we present an approach to generate graphs from the trajectories produced by the tourist walks. The generated graphs embody important characteristics related to tourist transitivity in the image. Computed from these graphs, statistical measures of position (mean degree) and dispersion (entropy of two vertices with the same degree) are used as texture descriptors. A comparison with traditional texture analysis methods illustrates the high performance of this novel approach. (C) 2011 Elsevier Ltd. All rights reserved.
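A minimal sketch of the general idea, building a graph from tourist-walk trajectories and reading off degree-based position and dispersion descriptors (the walk generation and the exact descriptor definitions in the paper are more elaborate):

```python
from collections import Counter
import numpy as np
import networkx as nx

def trajectories_to_graph(trajectories):
    """Build a directed graph whose edges are the pixel-to-pixel
    transitions made by the tourist walkers."""
    g = nx.DiGraph()
    for traj in trajectories:
        for a, b in zip(traj, traj[1:]):
            g.add_edge(a, b)
    return g

def degree_descriptors(g):
    """Texture descriptors: mean degree (position) and Shannon
    entropy of the degree distribution (dispersion)."""
    degrees = [d for _, d in g.degree()]
    mean_deg = np.mean(degrees)
    probs = np.array(list(Counter(degrees).values()), dtype=float)
    probs /= probs.sum()
    entropy = -np.sum(probs * np.log2(probs))
    return mean_deg, entropy

# Hypothetical trajectories (lists of pixel ids) produced by walkers.
walks = [[0, 1, 5, 6, 5, 6], [2, 1, 5, 9], [7, 6, 5, 1]]
print(degree_descriptors(trajectories_to_graph(walks)))
```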
Abstract:
Complex networks can be understood as graphs whose connectivity properties deviate from those of regular or near-regular graphs, which are understood as being "simple". While a great deal of the attention dedicated to complex networks so far has been duly driven by the "complex" nature of these structures, in this work we address the identification of their simplicity. The basic idea is to seek subgraphs whose nodes exhibit similar measurements. This approach paves the way for complementing the characterization of networks, and our results suggest that protein-protein interaction networks, and to a lesser extent the Internet, may be getting simpler over time. Copyright (C) EPLA, 2009
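A minimal sketch of that idea, grouping nodes with similar local measurements (here, degree and clustering coefficient, an illustrative choice, not the paper's exact procedure) and extracting the connected subgraphs they induce:

```python
import networkx as nx

def simple_subgraphs(g, tol=0.1):
    """Group nodes whose (degree, clustering) measurements are close,
    then return the non-trivial connected subgraphs of each group."""
    clustering = nx.clustering(g)
    # Bucket nodes by a coarsened measurement vector (crude similarity).
    buckets = {}
    for v in g.nodes():
        key = (g.degree(v), round(clustering[v] / tol))
        buckets.setdefault(key, []).append(v)
    result = []
    for nodes in buckets.values():
        sub = g.subgraph(nodes)
        for comp in nx.connected_components(sub):
            if len(comp) > 2:  # keep non-trivial pieces only
                result.append(g.subgraph(comp).copy())
    return result

g = nx.barabasi_albert_graph(200, 2, seed=1)
print([s.number_of_nodes() for s in simple_subgraphs(g)])
```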
Abstract:
2D electrophoresis is a well-known method for protein separation that is extremely useful in the field of proteomics. Each spot in the image represents a protein accumulation, and the goal is to perform a differential analysis between pairs of images to study changes in protein content. It is thus necessary to register two images by finding spot correspondences. Although it may seem a simple task, manual processing of this kind of image is generally very cumbersome, especially when strong variations between corresponding sets of spots are expected (e.g. strong non-linear deformations and outliers). To solve this problem, this paper proposes a new quadratic assignment formulation together with a correspondence estimation algorithm based on graph matching, which takes into account the structural information between the detected spots. Each image is represented by a graph, and the task is to find a maximum common subgraph. Successful experimental results using real data are presented, including an extensive comparative performance evaluation with ground-truth data. (C) 2010 Elsevier B.V. All rights reserved.
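A minimal sketch of the structural-matching idea using an off-the-shelf quadratic-assignment solver (SciPy's FAQ heuristic; the paper's own formulation and algorithm differ in their details, and the spot coordinates below are synthetic):

```python
import numpy as np
from scipy.optimize import quadratic_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Hypothetical detected spot coordinates in two gel images: the second
# image is a slightly rotated, noisy, shuffled copy of the first.
spots_a = rng.uniform(0, 100, size=(15, 2))
theta = np.deg2rad(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
perm = rng.permutation(15)
spots_b = (spots_a @ rot.T + rng.normal(0, 0.5, (15, 2)))[perm]

# Structural information: pairwise distances within each image.
A = cdist(spots_a, spots_a)
B = cdist(spots_b, spots_b)

# Graph matching as a quadratic assignment problem: align the two
# distance structures (maximize trace(A P B P^T) via the FAQ heuristic).
res = quadratic_assignment(A, B, method="faq", options={"maximize": True})

# res.col_ind[i] is the spot in image B matched to spot i in image A.
print("correctly matched:", np.sum(perm[res.col_ind] == np.arange(15)))
```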
Abstract:
The assessment of routing protocols for mobile wireless networks is a difficult task because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay-tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graph, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there has been no study of the use of such theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then to use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, such as adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
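A minimal sketch of the core evolving-graph computation, a foremost (earliest-arrival) journey in a graph whose links are only up at given time steps; the link schedule below is hypothetical and the one-step traversal cost is a simplifying assumption:

```python
import heapq

def foremost_journey(schedule, source, target, t0=0):
    """Earliest-arrival journey in an evolving graph.
    `schedule` maps (u, v) to the sorted time steps at which that
    link is up; traversing a link takes one time step."""
    best = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == target:
            return t
        if t > best.get(u, float("inf")):
            continue  # stale queue entry
        for (a, b), times in schedule.items():
            if a != u:
                continue
            # Earliest time >= t at which link (u, b) is up.
            nxt = next((s for s in times if s >= t), None)
            if nxt is not None and nxt + 1 < best.get(b, float("inf")):
                best[b] = nxt + 1
                heapq.heappush(heap, (nxt + 1, b))
    return None

# Hypothetical schedule: link -> time steps when it exists.
schedule = {("A", "B"): [0, 4], ("B", "C"): [1, 2], ("A", "C"): [5]}
print(foremost_journey(schedule, "A", "C"))  # arrives at time 2 via B
```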
Abstract:
Let M = (V, E, A) be a mixed graph with vertex set V, edge set E and arc set A. A cycle cover of M is a family C = {C_1, ..., C_k} of cycles of M such that each edge/arc of M belongs to at least one cycle in C. The weight of C is |C_1| + ... + |C_k|. The minimum cycle cover problem is the following: given a strongly connected mixed graph M without bridges, find a cycle cover of M with weight as small as possible. The Chinese postman problem is: given a strongly connected mixed graph M, find a minimum-length closed walk using all edges and arcs of M. These problems are NP-hard. We show that they can be solved in polynomial time if M has bounded tree-width. (C) 2008 Elsevier B.V. All rights reserved.
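In standard notation, the minimum cycle cover problem solved here in polynomial time on bounded tree-width instances reads:

\[
\min_{\mathcal{C} = \{C_1, \dots, C_k\}} \; \sum_{i=1}^{k} |C_i|
\qquad \text{subject to every } e \in E \cup A \text{ lying on at least one } C_i \in \mathcal{C}.
\]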
Abstract:
In 1983, Chvátal, Trotter and the two senior authors proved that for any Δ there exists a constant B such that, for any n, any 2-colouring of the edges of the complete graph K_N with N ≥ Bn vertices yields a monochromatic copy of any graph H that has n vertices and maximum degree Δ. We prove that the complete graph may be replaced by a sparser graph G that has N vertices and O(N^{2-1/Δ} log^{1/Δ} N) edges, with N = [B'n] for some constant B' that depends only on Δ. Consequently, the so-called size-Ramsey number of any H with n vertices and maximum degree Δ is O(n^{2-1/Δ} log^{1/Δ} n). Our approach is based on random graphs; in fact, we show that the classical Erdős-Rényi random graph with the numerical parameters above satisfies a stronger partition property with high probability, namely, that any 2-colouring of its edges contains a monochromatic universal graph for the class of graphs on n vertices and maximum degree Δ. The main tool in our proof is the regularity method, adapted to a suitable sparse setting. The novel ingredient developed here is an embedding strategy that allows one to embed bounded-degree graphs of linear order in certain pseudorandom graphs. Crucial to our proof is the fact that regularity is typically inherited at a scale that is much finer than the scale at which it is assumed. (C) 2011 Elsevier Inc. All rights reserved.
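Writing \(\hat{r}(H)\) for the size-Ramsey number, the bound stated above reads:

\[
\hat{r}(H) = O\!\left(n^{2 - 1/\Delta} \log^{1/\Delta} n\right)
\qquad \text{for every } H \text{ with } n \text{ vertices and maximum degree } \Delta.
\]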
Abstract:
Consider the following problem: for given graphs G and F_1, ..., F_k, find a colouring of the edges of G with k colours such that G does not contain F_i in colour i. Rödl and Ruciński studied this problem for the random graph G(n,p) in the symmetric case when k is fixed and F_1 = ... = F_k = F. They proved that such a colouring exists asymptotically almost surely (a.a.s.) provided that p ≤ b n^{-β} for some constants b = b(F, k) and β = β(F). This result is essentially best possible because for p ≥ B n^{-β}, where B = B(F, k) is a large constant, such an edge-colouring does not exist. Kohayakawa and Kreuter conjectured a threshold function n^{-β(F_1, ..., F_k)} for arbitrary F_1, ..., F_k. In this article we address the case when F_1, ..., F_k are cliques of different sizes and propose an algorithm that a.a.s. finds a valid k-edge-colouring of G(n,p) with p ≤ b n^{-β} for some constant b = b(F_1, ..., F_k), where β = β(F_1, ..., F_k) is as conjectured. With a few exceptions, this algorithm also works in the general symmetric case. We also show that there exists a constant B = B(F_1, ..., F_k) such that for p ≥ B n^{-β} the random graph G(n,p) a.a.s. does not have a valid k-edge-colouring, provided the so-called KLR conjecture holds. (C) 2008 Wiley Periodicals, Inc. Random Struct. Alg., 34, 419-453, 2009
Abstract:
The performance of a carbon paste electrode (CPE) modified with SBA-15 nanostructured silica organofunctionalised with 2-benzothiazolethiol in the simultaneous determination of Pb(II), Cu(II) and Hg(II) ions in natural water and sugar cane spirit (cachaça) is described. Pb(II), Cu(II) and Hg(II) were pre-concentrated on the surface of the modified electrode by complexation with 2-benzothiazolethiol and reduced at a negative potential (-0.80 V). The reduced products were then oxidised by a differential pulse anodic stripping voltammetry (DPASV) procedure. The appearance of three stripping peaks on the voltammograms, at potentials of -0.48 V (Pb2+), -0.03 V (Cu2+) and +0.36 V (Hg2+) versus the SCE, demonstrates the possibility of simultaneous determination of Pb2+, Cu2+ and Hg2+. The best results were obtained under the following optimised conditions: 100 mV pulse amplitude, 3 min accumulation time and 25 mV s^{-1} scan rate in phosphate solution at pH 3.0. Using these parameters, the calibration graphs were linear in the concentration ranges of 3.00-70.0 × 10^{-7} mol L^{-1} (Pb2+), 8.00-100.0 × 10^{-7} mol L^{-1} (Cu2+) and 2.00-10.0 × 10^{-6} mol L^{-1} (Hg2+). Detection limits of 4.0 × 10^{-8} mol L^{-1} (Pb2+), 2.0 × 10^{-7} mol L^{-1} (Cu2+) and 4.0 × 10^{-7} mol L^{-1} (Hg2+) were obtained at a signal-to-noise ratio (SNR) of 3. The results indicate that this electrode is sensitive and effective for the simultaneous determination of Pb2+, Cu2+ and Hg2+ in the analysed samples. (C) 2008 Published by Elsevier B.V.
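As an illustration of how such calibration data are typically reduced, a minimal sketch fitting a calibration line and deriving a detection limit at S/N = 3 (the numbers are made-up placeholders, not the reported data):

```python
import numpy as np

# Hypothetical calibration data for one analyte: concentration (mol/L)
# versus stripping peak current (arbitrary units).
conc = np.array([3.0e-7, 1.0e-6, 3.0e-6, 5.0e-6, 7.0e-6])
peak = np.array([0.021, 0.068, 0.209, 0.342, 0.488])

# Linear least-squares fit: peak = slope * conc + intercept.
slope, intercept = np.polyfit(conc, peak, 1)

# Detection limit at S/N = 3: three times the blank noise over the slope.
sd_blank = 0.002  # standard deviation of blank measurements (assumed)
lod = 3 * sd_blank / slope
print(f"slope = {slope:.3e} (signal L/mol), LOD = {lod:.1e} mol/L")
```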