974 results for Convexity in Graphs


Relevance:

30.00%

Publisher:

Abstract:

The brain's structural and functional systems, protein-protein interaction, and gene networks are examples of biological systems that share some features of complex networks, such as highly connected nodes, modularity, and small-world topology. Recent studies indicate that some pathologies present topological network alterations relative to norms seen in the general population. Therefore, methods to discriminate the processes that generate the different classes of networks (e.g., normal and disease) might be crucial for the diagnosis, prognosis, and treatment of disease. It is known that several topological properties of a network (graph) can be described by the distribution of the spectrum of its adjacency matrix. Moreover, large networks generated by the same random process have the same spectrum distribution, allowing us to use it as a "fingerprint". Based on this relationship, we propose the entropy of a graph spectrum to measure the "uncertainty" of a random graph, and the Kullback-Leibler and Jensen-Shannon divergences between graph spectra to compare networks. We also introduce general methods for model selection and network model parameter estimation, as well as a statistical procedure to test the nullity of divergence between two classes of complex networks. Finally, we demonstrate the usefulness of the proposed methods by applying them to (1) protein-protein interaction networks of different species and (2) networks derived from children diagnosed with Attention Deficit Hyperactivity Disorder (ADHD) and typically developing children. We conclude that scale-free networks best describe all the protein-protein interactions. We also show that our proposed measures succeeded in identifying topological changes in the networks where other commonly used measures (number of edges, clustering coefficient, average path length) failed.
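As an illustration of the spectral approach above, the following sketch (a minimal reading of the abstract, not the authors' code; the bin count and the 1/sqrt(n) eigenvalue scaling are our assumptions) computes the spectral density of an adjacency matrix, its entropy, and the Jensen-Shannon divergence between two spectra:

```python
import numpy as np

def spectral_density(adj, bins=50, rng=(-2.0, 2.0)):
    """Empirical distribution of the scaled adjacency-matrix spectrum."""
    n = adj.shape[0]
    # Scaling by sqrt(n) keeps spectra of different-sized graphs comparable.
    eigs = np.linalg.eigvalsh(adj) / np.sqrt(n)
    hist, _ = np.histogram(eigs, bins=bins, range=rng)
    return hist / hist.sum()

def spectral_entropy(p):
    """Shannon entropy of a spectral distribution (the graph 'uncertainty')."""
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q); assumes q > 0 wherever p > 0."""
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / q[mask])).sum())

def js_divergence(p, q):
    """Jensen-Shannon divergence between two spectral distributions."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```

For two large networks generated by the same random process, the Jensen-Shannon divergence between their spectral densities should approach zero, which is what makes the spectrum usable as a "fingerprint".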


Let G be a graph on n vertices with maximum degree Δ. We use the Lovász local lemma to show the following two results about colourings χ of the edges of the complete graph Kn. If for each vertex v of Kn the colouring χ assigns each colour to at most (n - 2)/(22.4Δ²) edges emanating from v, then there is a copy of G in Kn which is properly edge-coloured by χ. This improves on a result of Alon, Jiang, Miller, and Pritikin [Random Struct. Algorithms 23(4), 409-433, 2003]. On the other hand, if χ assigns each colour to at most n/(51Δ²) edges of Kn, then there is a copy of G in Kn such that each edge of G receives a different colour from χ. This proves a conjecture of Frieze and Krivelevich [Electron. J. Comb. 15(1), R59, 2008]. Our proofs rely on a framework developed by Lu and Székely [Electron. J. Comb. 14(1), R63, 2007] for applying the local lemma to random injections. In order to improve the constants in our results we use a version of the local lemma due to Bissacot, Fernández, Procacci, and Scoppola [preprint, arXiv:0910.1824]. (c) 2011 Wiley Periodicals, Inc. Random Struct. Alg., 40, 425-436, 2012
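Restated in symbols (our notation, following the abstract), the two sufficient conditions read:

```latex
\begin{aligned}
&\text{(properly coloured copy of } G\text{)} &&
\forall v \in V(K_n),\ \forall \text{ colour } c:\quad
|\{\, e \ni v : \chi(e) = c \,\}| \;\le\; \frac{n-2}{22.4\,\Delta^2}, \\[4pt]
&\text{(rainbow copy of } G\text{)} &&
\forall \text{ colour } c:\quad
|\chi^{-1}(c)| \;\le\; \frac{n}{51\,\Delta^2}.
\end{aligned}
```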


Objective: To observe the behavior of the plotted vectors on the RXc graph (R, resistance, and Xc, reactance, both standardized by body height/length) through bioelectrical impedance vector analysis (BIVA) and phase angle (PA) values in stable premature infants, considering the hypothesis that preterm infants present vector behavior on BIVA suggestive of less total body water and soft tissue, compared to reference data for term infants. Methods: Cross-sectional study including preterm neonates of both genders, in-patients admitted to an intermediate care unit at a tertiary care hospital. Data on delivery, diet and bioelectrical impedance (800 μA, 50 kHz) were collected. The graphs and vector analysis were produced with the BIVA software. Results: A total of 108 preterm infants were studied, separated according to age (< 7 days and ≥ 7 days). Most of the premature infants fell outside the normal range (above the 95% tolerance intervals) reported in the literature for term newborn infants, and the points tended to disperse toward the upper right quadrant of the RXc plane. The PA was 4.92° (± 2.18) for newborns < 7 days and 4.34° (± 2.37) for newborns ≥ 7 days. Conclusion: Premature infants behave similarly in terms of BIVA, and most of them have less absolute body water, presenting less fat-free mass and fat mass in absolute values, compared to term newborn infants.
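The two quantities analyzed above follow directly from the measured resistance and reactance (standard bioimpedance definitions; the function names are ours):

```python
import math

def phase_angle_deg(resistance, reactance):
    """Phase angle in degrees from whole-body resistance R and reactance Xc
    (both in ohms): PA = arctan(Xc / R), the standard BIA definition."""
    return math.degrees(math.atan2(reactance, resistance))

def rxc_coordinates(resistance, reactance, height_m):
    """Height-standardized coordinates (ohm/m) plotted on the RXc graph."""
    return resistance / height_m, reactance / height_m
```

With R = 500 ohm and Xc = 43 ohm, for example, the phase angle is about 4.9°, in the range reported above for the younger group.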


Background: In the literature there are several experimental models that induce scoliosis in rats; however, they make use of drugs or invasive interventions to generate the scoliotic curve. Objectives: To design and apply a non-invasive immobilization model to induce scoliosis in rats. Methods: Four-week-old male Wistar rats (85 ± 3.3 g) were divided into two groups: control (CG) and scoliosis (SG). The animals in the SG were immobilized for twelve weeks by two vests (scapular and pelvic) made from polyvinyl chloride (PVC) and externally attached to each other by a retainer that regulated the scoliosis angle, producing a curve with left convexity. After immobilization, the abdominal, intercostal, paravertebral, and pectoral muscles were collected for chemical and metabolic analyses. Radiographic reports were performed every 30 days over a 16-week period. Results: The model was effective in the induction of scoliosis, even 30 days after immobilization, with a stable angle of 28 ± 5°. The chemical and metabolic analyses showed a decrease (p<0.05) in the glycogen reserves and in the DNA-to-total-protein ratio of all the muscles analyzed in the scoliosis group, with lower values (p<0.05) on the convex side. The values for the Homeostatic Model Assessment of Insulin Resistance indicated insulin resistance (p<0.05) in the scoliosis group (0.66 ± 0.03) when compared to the control group (0.81 ± 0.02). Conclusions: The scoliosis curvature remained stable 30 days after immobilization. The chemical and metabolic analyses suggest changes in muscular homeostasis during the induced scoliosis process.
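The abstract does not state the formula behind the Homeostatic Model Assessment values it reports; assuming the standard Matthews et al. definition, the index is computed as:

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uU_ml):
    """HOMA insulin-resistance index, standard Matthews et al. formula
    (our assumption; the abstract does not give the formula used):
    glucose [mmol/L] * insulin [uU/mL] / 22.5."""
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5
```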


Object. The anatomy of the occipital lobe convexity is so intricate and variable that a precise description is not found in the classic anatomy textbooks, and the occipital sulci and gyri are described with different nomenclatures by different authors. The aim of this study was to investigate and describe the anatomy of the occipital lobe convexity and to clarify its nomenclature. Methods. The configurations of the sulci and gyri on the lateral surface of the occipital lobe of 20 cerebral hemispheres were examined in order to identify the most characteristic and consistent patterns. Results. The most characteristic and consistent occipital sulci identified in this study were the intraoccipital, transverse occipital, and lateral occipital sulci. The morphology of the connection between the transverse occipital sulcus and the intraoccipital sulcus was identified as the most important aspect in defining the gyral pattern of the occipital lobe convexity. Conclusions. Knowledge of the main features of the occipital sulci and gyri permits the recognition of a basic configuration of the occipital lobe and the identification of its sulcal and gyral variations. (http://thejns.org/doi/abs/10.3171/2012.1.JNS11978)


Let k and l be positive integers. With a graph G, we associate the quantity c(k,l)(G), the number of k-colourings of the edge set of G with no monochromatic matching of size l. Consider the function c(k,l) : N → N given by c(k,l)(n) = max{c(k,l)(G) : |V(G)| = n}, the maximum of c(k,l)(G) over all graphs G on n vertices. In this paper, we determine c(k,l)(n) and the corresponding extremal graphs for all large n and all fixed values of k and l.
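For very small graphs, the quantity c(k,l)(G) defined above can be computed by brute force (exponential in the number of edges, so only a toy check against the definition, not the paper's method):

```python
from itertools import combinations, product

def has_matching(edges, size):
    """Does this edge set contain `size` pairwise vertex-disjoint edges?"""
    if size == 0:
        return True
    return any(
        len({v for e in sub for v in e}) == 2 * size
        for sub in combinations(edges, size)
    )

def c_kl(edges, k, l):
    """Number of k-colourings of `edges` with no monochromatic matching
    of size l (direct enumeration of all k^|E| colourings)."""
    count = 0
    for colouring in product(range(k), repeat=len(edges)):
        classes = [[e for e, c in zip(edges, colouring) if c == col]
                   for col in range(k)]
        if not any(has_matching(cls, l) for cls in classes):
            count += 1
    return count
```

On the triangle K3, for instance, no matching of size 2 exists at all, so every 2-colouring of its three edges is counted, while forbidding monochromatic matchings of size 1 leaves no valid colouring.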


Assuming that textbooks give literary expression to the cultural and ideological values of a nation or group, we propose an analysis of the chemistry textbooks used in Brazilian universities throughout the twentieth century. We analyzed iconographic and textual aspects of 31 textbooks that had significant diffusion in Brazilian universities during that period. As a result of the iconographic analysis, nine categories of images were proposed: (1) laboratory and experimentation, (2) industry and production, (3) graphs and diagrams, (4) illustrations related to daily life, (5) models, (6) illustrations related to the history of science, (7) pictures or diagrams of animal, vegetable or mineral samples, (8) analogies, and (9) concepts of physics. The distribution of images among the categories showed differing emphases in the presentation of chemical content, reflecting a commitment to different conceptions of chemistry over the period. Thus, chemistry was presented as an experimental science in the early twentieth century, the emphasis shifted to the principles of chemistry from the 1950s, and the period culminated in a chemistry of undeniable technological influence. The results show that reflections not only on the history of science, but also on the history of science education, may be useful for the improvement of science education.


In this paper we quantified the consistency of word usage in written texts represented as complex networks, in which words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law, which applies to the frequency of use. Consistency correlated positively with familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied to other tasks, such as emotion recognition.
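The neighborhood-preservation idea can be sketched as follows. The abstract does not specify the exact consistency index, so we use a Jaccard overlap of co-occurrence neighborhoods across texts as a stand-in:

```python
from collections import defaultdict

def neighborhoods(tokens, window=1):
    """Map each word to the set of words co-occurring within `window`
    positions of any of its occurrences."""
    nb = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                nb[w].add(tokens[j])
    return nb

def consistency(word, texts, window=1):
    """Jaccard overlap of the word's neighborhoods across texts
    (1.0 = identical neighborhood everywhere, 0.0 = no overlap)."""
    nbs = [neighborhoods(t, window).get(word, set()) for t in texts]
    inter = set.intersection(*nbs)
    union = set.union(*nbs)
    return len(inter) / len(union) if union else 0.0
```

A word always flanked by the same words scores 1.0; a word used in varied contexts scores near 0, matching the intuition that highly consistent words live in limited semantic contexts.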


This paper presents a technique for performing analog design synthesis at the circuit level, providing feedback to the designer through exploration of the Pareto frontier. A modified simulated annealing algorithm, able to perform crossover with past anchor points when a local minimum is found, is used as the optimization algorithm in the initial synthesis procedure. After all specifications are met, the algorithm searches for the extreme points of the Pareto frontier in order to obtain a non-exhaustive exploration of the Pareto front. Finally, multi-objective particle swarm optimization is used to spread the results and to find a more accurate frontier. Piecewise linear functions are used as single-objective cost functions to produce a smooth and equal convergence of all measurements to the desired specifications during the composition of the aggregate objective function. To verify the presented technique, two circuits were designed: a Miller amplifier with 96 dB voltage gain, 15.48 MHz unity-gain frequency, a slew rate of 19.2 V/μs and a supply current of 385.15 μA, and a complementary folded cascode with 104.25 dB voltage gain, 18.15 MHz unity-gain frequency and a slew rate of 13.37 MV/μs. These circuits were synthesized using a 0.35 μm technology. The results show that the method provides a fast approach to good solutions using the modified SA, and further good Pareto front exploration through its connection to the particle swarm optimization algorithm.
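A piecewise linear single-objective cost of the kind described above can be sketched as follows (the spec/worst anchor points and the sum aggregation are illustrative assumptions, not the paper's exact functions):

```python
def pwl_cost(value, spec, worst):
    """Piecewise-linear cost for one measurement: 0 once the spec is met,
    growing linearly up to 1 at the worst acceptable value, then saturating.
    The direction of improvement is inferred from the two anchors."""
    if spec < worst:          # smaller is better (e.g. supply current)
        if value <= spec:
            return 0.0
        return min(1.0, (value - spec) / (worst - spec))
    else:                     # larger is better (e.g. DC gain)
        if value >= spec:
            return 0.0
        return min(1.0, (spec - value) / (spec - worst))

def aggregate_cost(measured, specs):
    """Aggregate objective: sum of the individual piecewise-linear costs,
    so all measurements converge toward their specs at comparable rates."""
    return sum(pwl_cost(measured[name], spec, worst)
               for name, (spec, worst) in specs.items())
```

Normalizing every measurement into [0, 1] this way is what gives the "smooth and equal convergence" the abstract mentions: no single specification dominates the aggregate objective.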


Cooperation plays an important role in the evolution of species and human societies. Understanding the emergence and persistence of cooperation in those systems is a fascinating and fundamental question. Many mechanisms have been extensively studied and proposed as supporting cooperation. The current work addresses the role of migration in the maintenance of cooperation in structured populations. This problem is investigated from an evolutionary perspective through the prisoner's dilemma game paradigm. It is found that migration and structure play an essential role in the evolution of cooperative behavior. The possible outcomes of the model are extinction of the entire population, dominance of the cooperative strategy, and coexistence between cooperators and defectors. The coexistence phase is obtained in the range of large migration rates. We also verify the existence of a critical level of structuring beyond which cooperation is always likely. In summary, we conclude that increases in the number of demes, as well as in the migration rate, favor the fixation of cooperative behavior.
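A deme-structured prisoner's dilemma with migration can be sketched as follows (the payoff values, the 0.9 imitation probability and the swap-based migration scheme are illustrative assumptions, not the paper's exact model):

```python
import random

R, S, T, P = 3, 0, 5, 1   # standard prisoner's dilemma payoffs (T > R > P > S)

def payoff(me, other):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(me, other)]

def step(demes, migration_rate, rng):
    """One generation: play the PD within each deme, imitate the best local
    strategy with high probability, then migrate individuals between demes."""
    new = []
    for deme in demes:
        scores = [sum(payoff(a, b) for j, b in enumerate(deme) if j != i)
                  for i, a in enumerate(deme)]
        best = deme[scores.index(max(scores))]
        new.append([best if rng.random() < 0.9 else a for a in deme])
    # migration: swap random individuals between random pairs of demes
    for _ in range(int(migration_rate * sum(len(d) for d in new))):
        d1, d2 = rng.sample(range(len(new)), 2)
        i, j = rng.randrange(len(new[d1])), rng.randrange(len(new[d2]))
        new[d1][i], new[d2][j] = new[d2][j], new[d1][i]
    return new
```

Iterating `step` and tracking the fraction of cooperators across demes and migration rates is the kind of experiment that yields the extinction/dominance/coexistence phases described above.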


Abstract Background Recently, it was realized that the functional connectivity networks estimated from actual brain-imaging technologies (MEG, fMRI and EEG) can be analyzed by means of graph theory, i.e., a mathematical representation of a network that is essentially reduced to nodes and the connections between them. Methods We used high-resolution EEG technology to enhance the poor spatial information of the EEG activity on the scalp; it gives a measure of the electrical activity on the cortical surface. We then used the Directed Transfer Function (DTF), a multivariate spectral measure, to estimate the directional influences between any given pair of channels in a multivariate dataset. Finally, a graph theoretical approach was used to model the brain networks as graphs. These methods were used to analyze the structure of cortical connectivity during the attempt to move a paralyzed limb in a group (N=5) of spinal cord injured (SCI) patients and during movement execution in a group (N=5) of healthy subjects. Results Analysis of the cortical networks estimated from the healthy group and the SCI patients revealed that both groups present few nodes with a high out-degree value (i.e. outgoing links). This property holds in the networks estimated for all the frequency bands investigated. In particular, cingulate motor area (CMA) ROIs act as "hubs" for the outflow of information in both groups, SCI and healthy. Results also suggest that spinal cord injuries affect the functional architecture of the cortical network subserving the volition of motor acts mainly in its local-feature property. In particular, a higher local efficiency El can be observed in the SCI patients for three frequency bands: theta (3-6 Hz), alpha (7-12 Hz) and beta (13-29 Hz). By taking into account all the possible pathways between different ROI couples, we were able to clearly separate the network properties of the SCI group from those of the CTRL group.
In particular, we report a sort of compensatory mechanism in the SCI patients for the theta (3-6 Hz) frequency band, indicating a higher level of "activation" Ω within the cortical network during the motor task. The activation index is directly related to diffusion, a type of dynamics that underlies several biological systems, including the possible spreading of neuronal activation across several cortical regions. Conclusions The present study aims at demonstrating the possible applications of graph theoretical approaches in the analysis of brain functional connectivity from EEG signals. In particular, the methodological aspects of (i) the estimation of cortical activity from scalp EEG signals, (ii) the estimation of functional connectivity, and (iii) the graph theoretical indexes are emphasized in the present paper to show their impact in a real application.
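The hub analysis described above reduces, at its simplest, to thresholding the directed connectivity matrix and ranking nodes by out-degree. A minimal sketch (the threshold choice is an assumption; the study's actual pipeline estimates DTF values first):

```python
def out_degrees(connectivity, threshold):
    """Out-degree of every node after thresholding a directed (e.g. DTF)
    connectivity matrix: an edge i -> j exists when connectivity[i][j]
    exceeds the threshold (diagonal ignored)."""
    return [sum(1 for j, w in enumerate(row) if j != i and w > threshold)
            for i, row in enumerate(connectivity)]

def hubs(connectivity, threshold, top=1):
    """Indices of the `top` nodes with the highest out-degree (the 'hubs'
    for the outflow of information)."""
    deg = out_degrees(connectivity, threshold)
    return sorted(range(len(deg)), key=lambda i: -deg[i])[:top]
```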


This work deals with some classes of linear second order partial differential operators with non-negative characteristic form and underlying non-Euclidean structures. These structures are determined by families of locally Lipschitz-continuous vector fields in R^N, generating metric spaces of Carnot-Carathéodory type. The Carnot-Carathéodory metric related to a family {X_j}_{j=1,...,m} is the control distance obtained by minimizing the time needed to go from one point to another along piecewise trajectories of the vector fields. We are mainly interested in the cases in which a Sobolev-type inequality holds with respect to the X-gradient, and/or the X-control distance is doubling with respect to the Lebesgue measure in R^N. This study is divided into three parts (each corresponding to a chapter), and the subject of each one is a class of operators that includes the class of the subsequent one. In the first chapter, after recalling "X-ellipticity" and related concepts introduced by Kogoj and Lanconelli in [KL00], we show a maximum principle for linear second order differential operators for which we only assume a Sobolev-type inequality together with a summability condition on the lower order terms. Adding some crucial hypotheses on the measure and on the vector fields (doubling property and Poincaré inequality), we are able to obtain some Liouville-type results. This chapter is based on the paper [GL03] by Gutiérrez and Lanconelli. In the second chapter we treat some ultraparabolic equations on Lie groups. In this case R^N is the support of a Lie group, and moreover we require that the vector fields satisfy left invariance. After recalling some results of Cinti [Cin07] about this class of operators and the associated potential theory, we prove a scalar convexity for mean-value operators of L-subharmonic functions, where L is our differential operator.
In the third chapter we prove a necessary and sufficient condition of regularity, for boundary points, for the Dirichlet problem on an open subset of R^N related to the sub-Laplacian. On a Carnot group we give the essential background for this type of operator, and introduce the notion of "quasi-boundedness". Then we show the strict relationship between this notion, the fundamental solution of the given operator, and the regularity of the boundary points.
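For reference, the Carnot-Carathéodory control distance described above is usually written as follows (our formulation of the standard definition, not a quotation from the thesis):

```latex
d_{CC}(x,y) \;=\; \inf\Bigl\{\, T > 0 \;:\;
\exists\, \gamma : [0,T] \to \mathbb{R}^N,\ \gamma(0) = x,\ \gamma(T) = y,\\
\gamma'(t) = \sum_{j=1}^{m} a_j(t)\, X_j(\gamma(t)) \ \text{a.e.},\quad
\sum_{j=1}^{m} a_j(t)^2 \le 1 \,\Bigr\}.
```

Minimizing the travel time T over such "sub-unit" trajectories is exactly the sense in which the metric is a control distance.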


In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from collected disease counts and expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the low population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature according to both the classic and the Bayesian paradigm. Our proposal approaches this issue by a decision-oriented method, which focuses on multiple testing control, without however leaving the preliminary-study perspective that an analysis on SMR indicators is asked to serve. We implement control of the FDR, a quantity largely used to address multiple comparisons problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, takes the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR does not exceed a prefixed value; we call these estimated-FDR based decision (or selection) rules. The sensitivity and specificity of such rules depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, and under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model performance in FDR estimation accuracy, in sensitivity and specificity of the decision rule, and in goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimate in the sets constituted by all the b_i's lower than a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR, we can check the sensitivity and specificity of the corresponding decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we have good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR based decision rules is generally low, but the specificity is high. In such scenarios the use of selection rules based on an estimated FDR of 0.05 or 0.10 can be suggested.
In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on an estimated FDR of 0.15 gain power while maintaining a high specificity. On the other hand, in the scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, decision rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and the FDR control, except in the scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
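The estimated-FDR decision rule described above can be sketched as follows, given the posterior null probabilities b_i for each area (the greedy ordering by increasing b_i is our simplification of "selecting as many areas as possible"):

```python
def estimated_fdr(b, rejected):
    """Estimated FDR of a rejection set: the mean posterior null
    probability b_i over the areas declared at high risk."""
    return sum(b[i] for i in rejected) / len(rejected)

def select_high_risk(b, fdr_target):
    """Select as many areas as possible (smallest b_i first) while the
    estimated FDR of the selection stays at or below fdr_target."""
    order = sorted(range(len(b)), key=lambda i: b[i])
    chosen, total = [], 0.0
    for i in order:
        if (total + b[i]) / (len(chosen) + 1) <= fdr_target:
            chosen.append(i)
            total += b[i]
    return chosen
```

Because the running mean of the sorted b_i's is non-decreasing, the greedy pass returns the largest prefix whose estimated FDR stays under the target.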


Compared with other mature engineering disciplines, fracture mechanics of concrete is still a developing field, and it is very important for structures such as bridges that are subject to dynamic loading. A historical overview of work done in the field is provided, and then the project is presented. The project presents an application of the Digital Image Correlation (DIC) technique for the detection of cracks at the surface of concrete prisms (500 mm x 100 mm x 100 mm) subject to flexural loading conditions (four-point bending test). The technique provides displacement measurements over the region of interest, and from this displacement field information about the crack mouth opening displacement (CMOD) is obtained and related to the applied load. The evolution of the fracture process is shown through graphs and graphical maps of the displacement at several steps of the loading process. The study shows that it is possible with the DIC system to detect the appearance and evolution of cracks, even before the cracks become visually detectable.
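At its core, DIC locates each patch of the reference image inside the deformed image and reads off the displacement. A minimal integer-pixel sketch using sum-of-squared-differences matching (real DIC codes use subpixel correlation criteria, so this only illustrates the principle):

```python
def best_match(reference, deformed, top, left, size, search):
    """Locate a size x size patch of the reference image inside the deformed
    image by minimising the sum of squared differences (SSD) over a
    +/- search pixel window; returns the (dy, dx) displacement of the patch."""
    patch = [row[left:left + size] for row in reference[top:top + size]]
    best, best_dy, best_dx = None, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > len(deformed) or x + size > len(deformed[0]):
                continue
            ssd = sum((deformed[y + i][x + j] - patch[i][j]) ** 2
                      for i in range(size) for j in range(size))
            if best is None or ssd < best:
                best, best_dy, best_dx = ssd, dy, dx
    return best_dy, best_dx
```

Running this over a grid of patches yields the displacement field; a crack shows up as a sharp jump in displacement between neighbouring patches, which is how CMOD is extracted and related to the applied load.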