886 results for Paths and cycles (Graph theory).
Abstract:
The critical behavior of the stochastic susceptible-infected-recovered (SIR) model on a square lattice is obtained by numerical simulations and finite-size scaling. The order parameter and the distribution of the number of recovered individuals are determined as functions of the infection rate for several values of the system size. The analysis around criticality is carried out by exploring the close relationship between the present model and standard percolation theory. The quantity U_P, equal to the ratio U between the second moment and the squared first moment of the size distribution, multiplied by the order parameter P, is shown to have, for a square system, a universal value 1.0167(1) that is the same for site and bond percolation, confirming further that the SIR model is also in the percolation class.
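In standard percolation notation (not quoted from the abstract), with s denoting the size of a cluster of recovered individuals, the universal combination discussed above can be written as:

```latex
% Moment ratio U and the combination U_P reported above; s is the cluster (outbreak)
% size and P the order parameter. Notation is generic, not copied from the paper.
U = \frac{\langle s^{2} \rangle}{\langle s \rangle^{2}}, \qquad
U_P = U \, P \approx 1.0167(1) \quad \text{(square system, at criticality)}
```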
Abstract:
We investigate the transport properties (I x V curves and zero-bias transmittance) of pristine graphene nanoribbons (GNRs), as well as GNRs doped with boron and nitrogen, using an approach that combines nonequilibrium Green's functions and density functional theory (NEGF-DFT). Even for a pristine nanoribbon we verify a spin-filter effect under finite bias voltage when the leads have an antiparallel magnetization. The presence of impurities at the edges of monohydrogenated zigzag GNRs dramatically changes the charge transport properties, inducing a spin-polarized conductance. The I x V curves for these systems show that, depending on the bias voltage, the spin polarization can be inverted. (C) 2010 Wiley Periodicals, Inc. Int J Quantum Chem 111: 1379-1386, 2011
Abstract:
Texture is one of the most important visual attributes for image analysis, and it has been widely used in image analysis and pattern recognition. A partially self-avoiding deterministic walk has recently been proposed as an approach for texture analysis, with promising results. This approach uses walkers (called tourists) to exploit the gray-scale image contexts at several levels. Here, we present an approach to generate graphs from the trajectories produced by the tourist walks. The generated graphs embody important characteristics related to tourist transitivity in the image. Computed from these graphs, statistical measures of position (degree mean) and dispersion (entropy of vertices with the same degree) are used as texture descriptors. A comparison with traditional texture analysis methods is performed to illustrate the high performance of this novel approach. (C) 2011 Elsevier Ltd. All rights reserved.
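For illustration only, the sketch below (not the authors' implementation; the trajectory is fabricated toy data, and the dispersion measure is a simple stand-in) shows how a tourist-walk trajectory can be turned into a graph and how degree-based descriptors might be computed from it:

```python
# Rough sketch: build a graph from a tourist-walk trajectory and compute
# degree-based descriptors. The walk below is toy data; in the paper, walks
# are driven by gray-level differences in the image.
from collections import Counter
from math import log2

trajectory = [0, 3, 1, 4, 1, 2, 5, 2]             # sequence of visited pixels (toy data)
edges = set()
for a, b in zip(trajectory, trajectory[1:]):      # each transition becomes an undirected edge
    edges.add((min(a, b), max(a, b)))

degree = Counter()
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

mean_degree = sum(degree.values()) / len(degree)  # position descriptor (degree mean)
hist = Counter(degree.values())                   # how many vertices share each degree value
probs = [c / len(degree) for c in hist.values()]
entropy = -sum(p * log2(p) for p in probs)        # dispersion descriptor (stand-in)
print(mean_degree, entropy)
```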
Abstract:
We discuss potential caveats when estimating topologies of 3D brain networks from surface recordings. It is virtually impossible to record activity from all single neurons in the brain, and one has to rely on techniques that measure average activity at sparsely located (non-invasive) recording sites. Effects of this spatial sampling in relation to structural network measures like centrality and assortativity were analyzed using multivariate classifiers. A simplified model of 3D brain connectivity incorporating both short- and long-range connections served for testing. To mimic M/EEG recordings, we sampled this model via non-overlapping regions and weighted nodes and connections according to their proximity to the recording sites. We used various complex network models for reference and tried to classify sampled versions of the "brain-like" network as one of these archetypes. It was found that sampled networks may substantially deviate in topology from the respective original networks for small sample sizes. For experimental studies this may imply that surface recordings can yield network structures that do not agree with the generating 3D network. (C) 2010 Elsevier Inc. All rights reserved.
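As a purely illustrative sketch of the sampling step (not the study's model or code; networkx, the graph parameters and the region scheme below are assumptions), a 3D spatial network can be coarse-grained into non-overlapping regions and a topology measure compared before and after:

```python
# Illustrative sketch: coarse-grain a 3D spatial network into non-overlapping
# regions ("recording sites") and compare degree assortativity before and after.
import networkx as nx

G = nx.random_geometric_graph(300, 0.2, dim=3, seed=1)   # stand-in for the 3D brain-like network
pos = nx.get_node_attributes(G, "pos")

def region(p, k=3):
    # split the unit cube into k^3 non-overlapping boxes
    return tuple(min(int(c * k), k - 1) for c in p)

S = nx.Graph()                                            # sampled (region-level) network
for u, v in G.edges():
    ru, rv = region(pos[u]), region(pos[v])
    if ru != rv:
        S.add_edge(ru, rv)

print("original assortativity:", nx.degree_assortativity_coefficient(G))
print("sampled assortativity: ", nx.degree_assortativity_coefficient(S))
```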
Abstract:
The assessment of routing protocols for mobile wireless networks is a difficult task, because of the networks' dynamic behavior and the absence of benchmarks. However, some of these networks, such as intermittent wireless sensor networks, periodic or cyclic networks, and some delay tolerant networks (DTNs), have more predictable dynamics, as the temporal variations in the network topology can be considered deterministic, which may make them easier to study. Recently, a graph-theoretic model, the evolving graphs, was proposed to help capture the dynamic behavior of such networks, in view of the construction of least-cost routing and other algorithms. The algorithms and insights obtained through this model are theoretically very efficient and intriguing. However, there is no study on the use of such theoretical results in practical situations. Therefore, the objective of our work is to analyze the applicability of evolving graph theory to the construction of efficient routing protocols in realistic scenarios. In this paper, we use the NS2 network simulator to first implement an evolving-graph-based routing protocol, and then to use it as a benchmark when comparing the four major ad hoc routing protocols (AODV, DSR, OLSR and DSDV). Interestingly, our experiments show that evolving graphs have the potential to be an effective and powerful tool in the development and analysis of algorithms for dynamic networks, at least those with predictable dynamics. In order to make this model widely applicable, however, some practical issues, like adaptive algorithms, still have to be addressed and incorporated into the model. We also discuss such issues in this paper, as a result of our experience.
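To make the evolving-graph idea concrete, here is a minimal sketch (illustrative only; the edge schedule and the one-step traversal cost are assumptions, not taken from the paper) of computing earliest-arrival ("foremost") journeys when each edge lists the time steps at which it exists:

```python
# Minimal sketch: earliest-arrival ("foremost") journeys in an evolving graph.
# Each edge carries the sorted list of time steps at which it is available;
# traversing an edge is assumed to take one time step.
import heapq

def foremost_arrival(edges, source, t0=0):
    """edges: {node: [(neighbor, sorted list of times the edge is up), ...]}"""
    arrival = {source: t0}
    heap = [(t0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > arrival.get(u, float("inf")):
            continue                                        # stale queue entry
        for v, times in edges.get(u, []):
            nxt = next((s for s in times if s >= t), None)  # earliest availability >= t
            if nxt is not None and nxt + 1 < arrival.get(v, float("inf")):
                arrival[v] = nxt + 1
                heapq.heappush(heap, (arrival[v], v))
    return arrival

topology = {
    "A": [("B", [1, 4]), ("C", [2])],
    "B": [("D", [5])],
    "C": [("D", [3])],
}
print(foremost_arrival(topology, "A"))   # D is reached at time 4, via C
```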
Abstract:
Inspired by recent work on approximations of classical logic, we present a method that approximates several modal logics in a modular way. Our starting point is the limitation of the n-degree of introspection that is allowed, thus generating modal n-logics. The semantics for n-logics is presented, in which formulas are evaluated with respect to paths, and not possible worlds. A tableau-based proof system, n-SST, is presented, and soundness and completeness are shown for the approximation of the modal logics K, T, D, S4 and S5. (c) 2008 Published by Elsevier B.V.
Abstract:
The nonadiabatic photochemistry of 6-azauracil has been studied by means of the CASPT2//CASSCF protocol and double-zeta plus polarization ANO basis sets. Minimum energy states, transition states, minimum energy paths, and surface intersections have been computed in order to obtain an accurate description of several potential energy hypersurfaces. It is concluded that, after absorption of ultraviolet radiation (248 nm), two main relaxation mechanisms may occur, via which the lowest ³(ππ*) state can be populated. The first one takes place via a conical intersection involving the bright ¹(ππ*) and the lowest ¹(nπ*) states, (¹ππ*/¹nπ*)CI, from which a low-energy singlet-triplet crossing, (¹nπ*/³ππ*)STC, connecting the ¹(nπ*) state to the lowest ³(ππ*) triplet state, is accessible. The second mechanism arises via a singlet-triplet crossing, (¹ππ*/³nπ*)STC, leading to a conical intersection in the triplet manifold, (³nπ*/³ππ*)CI, evolving to the lowest ³(ππ*) state. Further radiationless decay to the ground state is possible through a (gs/³ππ*)STC crossing.
Abstract:
Grammar has always been an important part of language learning. Based on various theories, such as the universal grammar theory (Chomsky, 1959) and the input theory (Krashen, 1970), explicit and implicit teaching methods have been developed. Research shows that both methods may have benefits and disadvantages. The attitude towards English grammar teaching methods in schools has also changed, and nowadays grammar teaching methods and learning strategies, as a part of language mastery, are among the discussion topics for linguists. This study focuses on teacher and learner experiences and beliefs about teaching English grammar and the difficulties learners may face. The aim of the study is to conduct a literature review and to find out what scientific knowledge exists concerning the previously named topics. Along with this, the relevant steering documents are investigated with a focus on grammar teaching at Swedish upper secondary schools. The universal grammar theory of Chomsky as well as Krashen's input hypotheses provide the theoretical background for the current study. The study has been conducted applying qualitative and quantitative methods. A systematic search in four databases, LIBRIS, ERIK, LLBA and Google Scholar, was used for collecting relevant publications. The results show that researchers' publications name different grammar areas that are perceived as problematic for learners all over the world. The most common explanation of these difficulties is the influence of the learner's L1. Research presents teachers' and learners' beliefs about the benefits of grammar teaching methods. An effective combination of teaching methods needs to be found to fit learners' expectations and individual needs. Together, they will contribute to achieving higher language proficiency levels and can therefore be successfully applied at Swedish upper secondary schools.
Abstract:
This thesis is an investigation of the corporate identity of the firm SSAB from a managerial viewpoint (1), the company's communication through press releases (2), and the image of the company as portrayed in news press articles (3). The managerial view of the corporate identity is researched through interviews with a communication manager of SSAB (1), the corporate communication is researched through press releases from the company (2), and the image is researched in news press articles (3). The results have been derived using content analysis. The three dimensions are compared in order to see if the topics are coherent. This work builds on earlier research in corporate identity and image, stakeholder theory, corporate communication, and media reputation theory. This is interesting to research because the image of the company framed by the media affects, among other things, the possibility for the company to attract new talent and employees. If there are different stories, or topics, told in the three dimensions, then future employees may not share the view of the company held by the managers in it. The analysis shows that there is a discrepancy between the topics in the three dimensions, both between the corporate identity and the communication through press releases, and between the communication through press releases and the image in news press articles.
Abstract:
While the simulation of flood risks originating from the overtopping of river banks is well covered within continuously evaluated programs to improve flood protection measures, flash flooding is not. Flash floods are triggered by short, local thunderstorm cells with high precipitation intensities. Small catchments have short response times and flow paths, and convective thunder cells may result in flooding of endangered settlements. Assessing local flooding and flood pathways requires a detailed hydraulic simulation of the surface runoff. Hydrological models usually do not incorporate surface runoff at this level of detail; instead, empirical equations are applied for runoff detention. In turn, 2D hydrodynamic models usually do not accept distributed rainfall as input, nor do they implement soil/surface interactions as hydrological models do. Considering several cases of local flash flooding in recent years, the issue has emerged both for practical reasons and as a research topic: closing the model gap between distributed rainfall and distributed runoff formation. Therefore, a 2D hydrodynamic model based on the depth-averaged flow equations with a finite volume discretization was extended to accept direct rainfall, enabling simulation of the associated runoff formation. The model itself is used as the numerical engine; rainfall is introduced via the modification of water levels at fixed time intervals. The paper not only deals with the general application of the software, but also intends to test the numerical stability and reliability of the simulation results. The tests use different artificial as well as measured rainfall series as input. Key parameters of the simulation, such as losses, roughness or the time intervals for water level manipulations, are tested regarding their impact on stability.
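To illustrate the coupling described above, the following sketch (a simplified stand-in, not the paper's software; grid size, loss fraction and rainfall intensity are assumptions) adds distributed rainfall to a 2D model by raising cell water levels at fixed intervals:

```python
# Simplified sketch: introduce distributed rainfall into a 2D hydrodynamic model
# by modifying cell water levels at fixed coupling intervals.
import numpy as np

def apply_rainfall(water_level, rain_intensity, dt, losses=0.0):
    """Add one interval's rainfall depth to each cell.

    water_level    : 2D array of water depths [m]
    rain_intensity : 2D array of rainfall intensities [m/s] (distributed input)
    dt             : coupling interval [s] at which levels are manipulated
    losses         : fraction of rainfall removed (a crude stand-in for infiltration)
    """
    return water_level + rain_intensity * dt * (1.0 - losses)

# Example: 100 x 100 grid, uniform 50 mm/h storm cell, 60 s coupling interval
h = np.zeros((100, 100))
rain = np.full((100, 100), 50e-3 / 3600.0)     # 50 mm/h expressed in m/s
for _ in range(60):                             # one hour of simulated rainfall
    h = apply_rainfall(h, rain, dt=60.0, losses=0.2)
    # ... the hydrodynamic engine would advance the depth-averaged flow equations here
print(h.max())                                  # about 0.04 m of accumulated effective rain
```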
Abstract:
The recent advances in CMOS technology have allowed for the fabrication of transistors with submicronic dimensions, making possible the integration of tens of millions of devices in a single chip that can be used to build very complex electronic systems. Such an increase in design complexity has created a need for more efficient verification tools that can incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied or not. It can be performed by using circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it presents the drawback of being stimulus-dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns. Obviously, this is not feasible due to the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only the circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, such a method may result in overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology, but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (i.e., the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable or not. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been exhaustively studied in the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the basic concern of this thesis. In addition, and as a necessary step to set the scenario, a detailed and systematic study of functional timing analysis is also presented.
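For context, topological timing analysis amounts to a longest-path computation on the combinational block's DAG; the sketch below (the gate delays and netlist are illustrative, not from the thesis) shows that traversal:

```python
# Sketch of topological timing analysis: the critical delay estimate is the
# longest path in the combinational block's directed acyclic graph (DAG).
from collections import defaultdict, deque

def critical_delay(gates, edges):
    """gates: {name: delay}; edges: list of (driver, sink) pairs."""
    succ = defaultdict(list)
    indeg = defaultdict(int)
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    arrival = {g: gates[g] for g in gates}           # arrival time at each gate output
    ready = deque(g for g in gates if indeg[g] == 0)
    while ready:                                     # topological traversal
        u = ready.popleft()
        for v in succ[u]:
            arrival[v] = max(arrival[v], arrival[u] + gates[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(arrival.values())

# Toy netlist: a -> c -> d and b -> d
print(critical_delay({"a": 1.0, "b": 2.0, "c": 1.5, "d": 0.5},
                     [("a", "c"), ("c", "d"), ("b", "d")]))   # 3.0
```

This purely structural estimate is exactly what can be pessimistic: the longest path it reports may be false, which functional timing analysis addresses by adding sensitization conditions on top of such a traversal.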
Abstract:
We report results on the optimal "choice of technique" in a model originally formulated by Robinson, Solow and Srinivasan (henceforth, the RSS model) and further discussed by Okishio and Stiglitz. By viewing this vintage-capital model without discounting as a specific instance of the general theory of intertemporal resource allocation associated with Brock, Gale and McKenzie, we resolve longstanding conjectures in the form of theorems on the existence and price support of optimal paths, and of conditions sufficient for the optimality of a policy first identified by Stiglitz. We dispose of the necessity of these conditions in surprisingly simple examples of economies in which (i) an optimal path is periodic, (ii) a path following Stiglitz' policy is bad, and (iii) there is optimal investment in different vintages at different times.
Abstract:
Reduced-form estimation of multivariate data sets currently takes into account long-run co-movement restrictions by using Vector Error Correction Models (VECMs). However, short-run co-movement restrictions are completely ignored. This paper proposes a way of taking into account short- and long-run co-movement restrictions in multivariate data sets, leading to efficient estimation of VECMs. It enables a more precise trend-cycle decomposition of the data, which imposes no untested restrictions to recover these two components. The proposed methodology is applied to a multivariate data set containing U.S. per-capita output, consumption and investment. Based on the results of a post-sample forecasting comparison between restricted and unrestricted VECMs, we show that a non-trivial loss of efficiency results whenever short-run co-movement restrictions are ignored. While permanent shocks to consumption still play a very important role in explaining consumption's variation, it seems that the improved estimates of trends and cycles of output, consumption, and investment show evidence of a more important role for transitory shocks than previously suspected. Furthermore, contrary to previous evidence, it seems that permanent shocks to output play a much more important role in explaining unemployment fluctuations.
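For reference, a generic VECM (notation not taken from the paper) in which the long-run restrictions enter through the cointegration term and the short-run co-movement restrictions act on the short-run coefficients can be written as:

```latex
% Generic p-th order VECM; long-run co-movement restrictions act through \alpha\beta'
% and short-run co-movement (common-cycle) restrictions through the \Gamma_j.
\Delta y_t = \alpha \beta' y_{t-1} + \sum_{j=1}^{p-1} \Gamma_j \, \Delta y_{t-j} + \varepsilon_t,
\qquad \varepsilon_t \sim \text{i.i.d.}(0, \Sigma)
```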
Abstract:
Prospect Theory is one of the foundations of Behavioral Finance and models investor behavior in a different way than von Neumann and Morgenstern's Utility Theory. Behavioral characteristics are evaluated for different control groups, validating the violation of the Utility Theory axioms. Naïve Diversification is also verified, through the 1/n heuristic strategy for investment fund allocations. This strategy produces fixed-income and equity allocations that differ from the desired exposure, given the exposure of the subsample that answered an unconstrained allocation question. Compared to non-specialists, specialists in finance are less risk averse and allocate more of their wealth to equity.