927 results for "Elementary shortest path with resource constraints"


Relevance: 100.00%

Abstract:

This thesis solves visibility and proximity problems on triangulated surfaces considering generalized elements. As generalized elements we consider points, segments, polylines and polygons. The proposed strategies use computational geometry algorithms and graphics hardware. We begin by addressing visibility problems on triangulated terrain models considering a set of generalized view elements. Two methods are presented to obtain, approximately, multi-visibility maps. A multi-visibility map is the subdivision of the terrain domain that encodes visibility according to different criteria. The first method, which is difficult to implement, uses exact visibility information to approximately reconstruct the multi-visibility map. The second, which is accompanied by implementation results, obtains approximate visibility information to compute and visualize discrete multi-visibility maps using graphics hardware. As applications, multi-visibility problems between regions are solved, and questions about the multi-visibility of a point or a region are answered. We then address proximity problems on triangulated polyhedral surfaces considering generalized sites. Two methods are presented, with implementation results, to compute distances from generalized sites on polyhedral surfaces that may contain generalized obstacles. The first method computes, exactly, the distances defined by the shortest paths from the sites to the points of the polyhedron. The second method computes, approximately, distances along shortest paths on weighted polyhedral surfaces. As applications, order-k Voronoi diagrams are computed, and some facility location problems are approximately solved.
A theoretical study is also provided on the complexity of order-k Voronoi diagrams of a set of generalized sites on an unweighted polyhedron.
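Approximate shortest-path computations on discretized surfaces, as described above, are typically built on a graph search such as Dijkstra's algorithm over a graph derived from the triangulation. As a minimal sketch (the toy weighted graph below is hypothetical, not the thesis's data structure):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest-path distances on a weighted graph.
    graph: dict mapping node -> list of (neighbor, edge_weight)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Toy weighted graph standing in for a discretized triangulated surface.
g = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("a", 1.0), ("c", 1.5)],
    "c": [("a", 4.0), ("b", 1.5)],
}
print(dijkstra(g, "a")["c"])  # 2.5, via b
```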

Relevance: 100.00%

Abstract:

Improving the quality of teaching is an educational priority in Kenya, as in many developing countries. The present paper considers various aspects of in-service education, including views on its effectiveness, teacher and headteacher priorities in determining in-service needs, and the constraints on providing in-service courses. These issues are examined through an empirical study of 30 secondary headteachers and 109 teachers in a district of Kenya. The results show a strongly felt need for in-service provision, together with a firm belief in the efficacy of in-service training in raising pupil achievement. Headteachers had a stronger belief in the need for in-service training for their teachers than did the teachers themselves. The priorities of both headteachers and teachers were dominated by external pressures on the schools, in particular the pressures for curriculum innovation and examination success. The resource constraints on supporting attendance at in-service courses were the major problem facing headteachers. The results reflect the difficulties that responding to an externally driven in-service agenda creates in a context of scarce resources.

Relevance: 100.00%

Abstract:

This paper introduces a new variant of the popular n-dimensional hypercube network Q(n), known as the n-dimensional locally twisted cube LTQ(n), which has the same number of nodes and the same number of connections per node as Q(n). Furthermore, LTQ(n) is similar to Q(n) in the sense that the nodes can be one-to-one labeled with 0-1 binary sequences of length n, so that the labels of any two adjacent nodes differ in at most two successive bits. One advantage of LTQ(n) is that its diameter is only about half the diameter of Q(n). We develop a simple routing algorithm for LTQ(n), which creates a shortest path from the source to the destination in O(n) time. We find that LTQ(n) can be constructed from two disjoint copies of LTQ(n-1) by adding a matching between their nodes. On this basis, we show that LTQ(n) has a connectivity of n.
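As a point of reference for the diameter comparison, the baseline hypercube Q(n) is easy to construct and probe with breadth-first search. The sketch below is an illustration of that baseline only (not the paper's LTQ routing algorithm): it verifies that the eccentricity of a node of Q(n), and hence the diameter, is n.

```python
from collections import deque

def hypercube_neighbors(x, n):
    # In Q(n), labels of adjacent nodes differ in exactly one bit.
    return [x ^ (1 << i) for i in range(n)]

def bfs_eccentricity(n, source=0):
    """Largest BFS distance from `source` in the n-dimensional hypercube."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in hypercube_neighbors(u, n):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

print(bfs_eccentricity(4))  # 4: the diameter of Q(4)
```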

Relevance: 100.00%

Abstract:

Identifying a periodic time-series model from environmental records without imposing positivity of the growth rate does not necessarily respect the time order of the data observations. Consequently, subsequent observations sampled in the environmental archive can be inverted on the time axis, resulting in a non-physical signal model. In this paper an optimization technique with linear constraints on the signal model parameters is proposed that prevents time inversions. The activation conditions for this constrained optimization are based on the physical constraint that the growth rate cannot take values smaller than zero. The actual constraints are defined for polynomials and first-order splines as basis functions for the nonlinear contribution in the distance-time relationship. The method is compared with an existing method that eliminates time inversions, and its noise sensitivity is tested by means of Monte Carlo simulations. Finally, the usefulness of the method is demonstrated on measurements of vessel density in a mangrove tree, Rhizophora mucronata, and measurements of Mg/Ca ratios in a bivalve, Mytilus trossulus.
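The positivity constraint on the growth rate amounts to requiring a non-decreasing fitted distance-time curve. A closely related, much simpler illustration of the same monotonicity idea is isotonic regression via the pool-adjacent-violators algorithm; this is not the paper's constrained optimization, just a sketch of how a least-squares fit can be forced to respect a non-negative growth rate.

```python
def isotonic_fit(y):
    """Pool-adjacent-violators: least-squares fit to y that is
    non-decreasing, i.e. the implied growth rate is never negative."""
    # Each block holds [sum, count]; merge blocks while order is violated.
    blocks = [[v, 1] for v in y]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            blocks[i][0] += blocks[i + 1][0]
            blocks[i][1] += blocks[i + 1][1]
            del blocks[i + 1]
            if i > 0:
                i -= 1  # merging may create a new violation upstream
        else:
            i += 1
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # each block's mean, repeated
    return fit

print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```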

Relevance: 100.00%

Abstract:

Cities are responsible for up to 70% of global carbon emissions and 75% of global energy consumption. By 2050 it is estimated that 70% of the world's population will live in cities. The critical challenge for contemporary urbanism, therefore, is to understand how to develop the knowledge, capacity and capability for public agencies, the private sector and multiple users in city-regions (i.e. the city and its wider hinterland) to re-engineer systemically their built environment and urban infrastructure in response to climate change and resource constraints. To inform transitions to urban sustainability, key stakeholders' perceptions were sought through a participatory backcasting and scenario foresight process in order to illuminate challenging but realistic socio-technical scenarios for the systemic retrofit of core UK city-regions. The challenge of conceptualizing complex urban transitions is explored across multiple socio-technical 'regimes' (housing, non-domestic buildings, urban infrastructure), scales (building, neighbourhood, city-region), and domains (energy, water, use of resources) within a participatory process. The development of three archetypal 'guiding visions' of retrofit city-regional futures developed through this process is discussed, along with the contribution that such foresight processes might play in 'opening up' the governance and strategic navigation of urban sustainability.

Relevance: 100.00%

Abstract:

Background: It can be argued that adaptive designs are underused in clinical research. We explored concerns related to inadequate reporting of such trials, which may influence their uptake. Through a careful examination of the literature, we evaluated the standards of reporting of group sequential (GS) randomised controlled trials, one form of confirmatory adaptive design.

Methods: We undertook a systematic review, searching Ovid MEDLINE from 1st January 2001 to 23rd September 2014, supplemented with trials from an audit study. We included parallel group, confirmatory GS trials that were prospectively designed using a frequentist approach. Eligible trials were examined for compliance in their reporting against the CONSORT 2010 checklist. In addition, as part of our evaluation, we developed a supplementary checklist to explicitly capture GS-specific reporting aspects, and investigated how these are currently being reported.

Results: Of the 284 screened trials, 68 (24%) were eligible. Most trials were published in "high impact" peer-reviewed journals. Examination of the trials established that 46 (68%) were stopped early, predominantly for either futility or efficacy. Suboptimal reporting compliance was found in general items relating to access to full trial protocols, methods to generate randomisation lists, and details of randomisation concealment and its implementation. Benchmarked against the supplementary checklist, GS aspects were largely inadequately reported. Only 3 (7%) of the trials that stopped early reported use of statistical bias correction. Moreover, 52 (76%) trials failed to disclose the methods used to minimise the risk of operational bias due to knowledge or leakage of interim results. The occurrence of changes to trial methods and outcomes could not be determined in most trials, due to inaccessible protocols and amendments.

Discussion and Conclusions: There are issues with the reporting of GS trials, particularly those specific to the conduct of interim analyses. Suboptimal reporting of bias correction methods could imply that most GS trials stopping early report biased estimates of treatment effects. As a result, research consumers may question the credibility of findings when trials are stopped early. These issues could be alleviated through a CONSORT extension. Assurance of scientific rigour through transparent, adequate reporting is paramount to the credibility of findings from adaptive trials. Our systematic literature search was restricted to one database due to resource constraints.
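Why interim analyses need statistical adjustment, one of the GS-specific items the checklist targets, can be illustrated with a small Monte Carlo sketch: reusing a fixed critical value at several looks inflates the type I error rate well beyond the nominal 5%. All parameters below (number of looks, sample sizes, trial count) are illustrative, not from the review.

```python
import random, math

def simulate_alpha(looks, n_per_look, z_crit, n_trials=2000, seed=1):
    """Fraction of null trials declared 'significant' at any of `looks`
    interim analyses when the same fixed critical value is reused."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_trials):
        total, n = 0.0, 0
        for _ in range(looks):
            for _ in range(n_per_look):
                total += rng.gauss(0.0, 1.0)  # data generated under the null
                n += 1
            z = total / math.sqrt(n)  # standard normal test statistic
            if abs(z) > z_crit:
                rejections += 1  # a false positive, since no effect exists
                break
    return rejections / n_trials

print(simulate_alpha(looks=1, n_per_look=50, z_crit=1.96))  # close to 0.05
print(simulate_alpha(looks=5, n_per_look=10, z_crit=1.96))  # clearly inflated
```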

Relevance: 100.00%

Abstract:

This work maps and analyses cross-citations in the areas of Biology, Mathematics, Physics and Medicine in the English version of Wikipedia, which are represented as an undirected complex network where the entries correspond to nodes and the citations among the entries are mapped as edges. We found a high value of clustering coefficient for the areas of Biology and Medicine, and a small value for Mathematics and Physics. The topological organization is also different for each network, including a modular structure for Biology and Medicine, a sparse structure for Mathematics and a dense core for Physics. The networks have degree distributions that can be approximated by a power-law with a cut-off. The assortativity of the isolated networks has also been investigated and the results indicate distinct patterns for each subject. We estimated the betweenness centrality of each node considering the full Wikipedia network, which contains the nodes of the four subjects and the edges between them. In addition, the average shortest path length between the subjects revealed a close relationship between the subjects of Biology and Physics, and also between Medicine and Physics. Our results indicate that the analysis of the full Wikipedia network cannot predict the behavior of the isolated categories since their properties can be very different from those observed in the full network. (C) 2011 Elsevier Ltd. All rights reserved.
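The clustering coefficient used above to distinguish the four subject networks can be computed per node as the fraction of that node's neighbor pairs that are themselves connected. A minimal sketch on a hypothetical toy graph (not Wikipedia data):

```python
from itertools import combinations

def clustering_coefficient(adj, v):
    """Fraction of pairs of v's neighbors that are themselves adjacent."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0  # undefined for degree < 2; report 0 by convention
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (k * (k - 1))

# Toy undirected graph: entries are nodes, citations are edges.
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}
print(clustering_coefficient(adj, "A"))  # 1 of A's 3 neighbor pairs is linked
```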

Relevance: 100.00%

Abstract:

Specific choices about how to represent complex networks can have a substantial impact on the execution time required for the construction and analysis of those structures. In this work we report a comparison of the effects of representing complex networks statically by adjacency matrices or dynamically by adjacency lists. Three theoretical models of complex networks are considered: two types of Erdős-Rényi network as well as the Barabási-Albert model. We investigated the effect of the different representations with respect to the construction and measurement of several topological properties (i.e. degree, clustering coefficient, shortest path length, and betweenness centrality). We found that different forms of representation generally have a substantial effect on the execution time, with the sparse representation frequently resulting in remarkably superior performance. (C) 2011 Elsevier B.V. All rights reserved.
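The contrast between the two representations can be sketched as follows: computing node degrees gives identical results either way, but iterating over neighbors costs O(n) per node with a matrix versus O(degree) with lists, which is where the sparse representation's advantage comes from. The toy graph is illustrative.

```python
def degrees_matrix(mat):
    # Adjacency matrix: scanning a row costs O(n) regardless of degree.
    return [sum(row) for row in mat]

def degrees_list(adj):
    # Adjacency list: each node's neighbors are stored explicitly,
    # so iteration costs O(degree).
    return [len(nbrs) for nbrs in adj]

edges = [(0, 1), (0, 2), (2, 3)]
n = 4
mat = [[0] * n for _ in range(n)]
adj = [[] for _ in range(n)]
for u, v in edges:
    mat[u][v] = mat[v][u] = 1  # static matrix representation
    adj[u].append(v)           # dynamic list representation
    adj[v].append(u)

print(degrees_matrix(mat))  # [2, 1, 2, 1]
print(degrees_list(adj))    # [2, 1, 2, 1]
```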

Relevance: 100.00%

Abstract:

The relationship between the structure and function of biological networks constitutes a fundamental issue in systems biology. In particular, the structure of protein-protein interaction networks is related to important biological functions. In this work, we investigated how the resilience of such networks is determined by their large-scale features. Four species are taken into account: the yeast Saccharomyces cerevisiae, the worm Caenorhabditis elegans, the fly Drosophila melanogaster, and Homo sapiens. We adopted two entropy-related measurements (degree entropy and dynamic entropy) in order to quantify the overall robustness of these networks. We verified that while the networks exhibit similar structural variations under random node removal, they differ significantly when subjected to intentional attacks (hub removal). As a matter of fact, more complex species tended to exhibit more robust networks. More specifically, we quantified how six important measurements of network topology (namely clustering coefficient, average degree of neighbors, average shortest path length, diameter, assortativity coefficient, and slope of the power-law degree distribution) correlate with the two entropy measurements. Our results revealed that the fraction of hubs and the average neighbor degree contribute significantly to the resilience of networks. In addition, the topological analysis of the removed hubs indicated that the presence of alternative paths between the proteins connected to hubs tends to reinforce resilience. The performed analysis helps to explain how resilience arises in networks and can be applied to the development of protein network models.
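Of the two entropy-related measurements, degree entropy is the simpler: the Shannon entropy of the degree distribution, which is zero for a regular network and grows with degree heterogeneity. A minimal sketch (the degree sequences below are toy examples, not the species networks):

```python
import math
from collections import Counter

def degree_entropy(degrees):
    """Shannon entropy of the degree distribution: H = -sum_k p_k log p_k."""
    total = len(degrees)
    counts = Counter(degrees)
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A regular network (all degrees equal) has zero degree entropy;
# heterogeneous degrees raise it.
print(degree_entropy([4, 4, 4, 4]) == 0.0)  # True
print(degree_entropy([1, 1, 2, 3]))
```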

Relevance: 100.00%

Abstract:

A geodesic in a graph G is a shortest path between two vertices of G. For a specific function e(n) of n, we define an almost geodesic cycle C in G to be a cycle in which, for every two vertices u and v in C, the distance d_G(u, v) is at least d_C(u, v) - e(n). Let omega(n) be any function tending to infinity with n. We consider a random d-regular graph on n vertices. We show that almost all pairs of vertices belong to an almost geodesic cycle C with e(n) = log_{d-1} log_{d-1} n + omega(n) and |C| = 2 log_{d-1} n + O(omega(n)). Along the way, we obtain results on near-geodesic paths. We also give the limiting distribution of the number of geodesics between two random vertices in this random graph. (C) 2010 Wiley Periodicals, Inc. J Graph Theory 66: 115-136, 2011
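The defining condition of an almost geodesic cycle, d_G(u, v) >= d_C(u, v) - e(n) for all u, v on C, can be checked directly with breadth-first search. A small sketch on a hypothetical graph (a 4-cycle with one chord), with e passed as a constant:

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted shortest-path distances from source via BFS."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def is_almost_geodesic(adj, cycle, eps):
    """Check d_G(u, v) >= d_C(u, v) - eps for every pair u, v on the cycle."""
    m = len(cycle)
    for i in range(m):
        dg = bfs_distances(adj, cycle[i])
        for j in range(i + 1, m):
            dc = min(j - i, m - (j - i))  # distance measured along the cycle
            if dg[cycle[j]] < dc - eps:
                return False
    return True

# A 4-cycle 0-1-2-3 with chord 0-2: the chord makes d_G(0,2) = 1 < d_C(0,2) = 2.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}
print(is_almost_geodesic(adj, [0, 1, 2, 3], eps=0))  # False
print(is_almost_geodesic(adj, [0, 1, 2, 3], eps=1))  # True
```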

Relevance: 100.00%

Abstract:

Drinking water distribution networks are exposed to the risk of malicious or accidental contamination. Several levels of response are conceivable. One of them consists of installing a sensor network to monitor the system in real time. Once a contamination has been detected, it is also important to take appropriate counter-measures. In the SMaRT-OnlineWDN project, this relies on modeling to predict both hydraulics and water quality. Using an online model makes it possible to identify the contaminant source and to simulate the contaminated area. The objective of this paper is to present the SMaRT-OnlineWDN experience and research results for hydraulic state estimation with a sampling frequency of a few minutes. A least squares problem with bound constraints is formulated to adjust demand class coefficients to best fit the observed values at a given time. The criterion is a Huber function, to limit the influence of outliers. A Tikhonov regularization is introduced to take prior information on the parameter vector into account. The Levenberg-Marquardt algorithm, which uses derivative information to limit the number of iterations, is then applied. Confidence intervals for the state prediction are also given. The results are presented and discussed on real networks in France and Germany.
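The Huber criterion mentioned above is quadratic for small residuals and linear for large ones, which is what limits the influence of outliers relative to plain least squares. A minimal sketch, with an illustrative threshold delta (the residual values are made up):

```python
def huber(r, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, linear growth beyond delta."""
    a = abs(r)
    if a <= delta:
        return 0.5 * r * r
    return delta * (a - 0.5 * delta)

residuals = [0.1, -0.5, 4.0]  # the last one is an outlier
print([round(huber(r), 3) for r in residuals])  # [0.005, 0.125, 3.5]
```

Note how the outlier contributes 3.5 rather than the 8.0 a squared loss would give, so it pulls the fit far less.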

Relevance: 100.00%

Abstract:

The complex behavior of a wide variety of phenomena that are of interest to physicists, chemists, and engineers has been quantitatively characterized by using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the space of fractals and the methods of Hausdorff-Besicovitch, box-counting and scaling to calculate the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf. In these objects the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences. The first is related to the coordination number, c, which changes along the multifractal. The second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against site occupation probability p. Depending on a parameter ρ characterizing the multifractal and the lattice size L, the histogram can have two peaks. We observe that the occupation probability at the percolation threshold, p_c, for the multifractal is lower than that for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation in a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show non-trivial behavior. A general view of the object Qmf shows an anisotropy. The value of p_c is a function of ρ, which is related to this anisotropy.
We investigate the relation between p_c and the average number of neighbors of the blocks, as well as the anisotropy of Qmf. In this thesis we likewise study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D). We study paths from one given point to multiple other points. In oil recovery terminology, the given single point can be mapped to an injection well (injector) and the multiple other points to production wells (producers). In the previously standard case of one injection well and one production well separated by Euclidean distance r, the distribution of shortest path lengths l, P(l|r), shows a power-law behavior with exponent g_l = 2.14 in 2D. Here we analyze the situation of one injector and an array A of producers. Symmetric arrays of producers lead to one peak in the distribution P(l|A), the probability that the shortest path between the injector and any of the producers is l, while asymmetric configurations lead to several peaks in the distribution. We analyze configurations in which the injector is outside and inside the set of producers. The peak in P(l|A) for the symmetric arrays decays faster than for the standard case. For very long paths, all the studied arrays exhibit a power-law behavior with exponent g ≈ g_l.
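The shortest path from an injector to the nearest producer in an array A can be found with a single breadth-first search over the occupied sites of a lattice. The sketch below uses the degenerate fully occupied lattice (p = 1), where the answer reduces to the smallest Manhattan distance, as a sanity check; grid size and well positions are illustrative, and sampling at p = p_c would replace the occupied-site set with a random one.

```python
from collections import deque

def shortest_to_producers(occupied, injector, producers):
    """BFS over occupied lattice sites; returns the length of the shortest
    path from the injector to the nearest producer (None if disconnected)."""
    prod = set(producers)
    dist = {injector: 0}
    q = deque([injector])
    while q:
        x, y = q.popleft()
        if (x, y) in prod:
            return dist[(x, y)]  # BFS reaches the nearest producer first
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in occupied and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                q.append(nxt)
    return None

# Fully occupied 10x10 lattice (p = 1): the shortest path to the nearest
# producer is simply the smallest Manhattan distance.
occ = {(x, y) for x in range(10) for y in range(10)}
print(shortest_to_producers(occ, (0, 0), [(5, 5), (2, 3)]))  # 5
```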

Relevance: 100.00%

Abstract:

Aim: Children with cerebral palsy (CP) are regularly confronted with physical constraints during locomotion. Because abnormalities in motor control are often related to perceptual deficits, the aim of this study was to find out whether children with CP were able to walk across a road as safely as their non-handicapped peers. Method: Ten children with CP and 10 non-handicapped children aged 4-14 years were asked to cross a simulated road if they felt the situation was safe. Results: With respect to safety and accuracy of crossings, the behaviour of children with CP was comparable with that of non-handicapped children. However, a closer examination of the children's individual crossing behaviour showed considerable differences within the CP group. In contrast to children with damage to the left hemisphere, children with damage to the right hemisphere made unsafe decisions and did not compensate for them by increasing walking speed. Conclusion: The differences in unsafe behaviour and in the ability to compensate for it within the group of children with CP might be related to damage to specific regions of the brain that are involved in the processing of spatial or temporal information.

Relevance: 100.00%

Abstract:

One common problem in all basic techniques of knowledge representation is handling the trade-off between precision of inferences and resource constraints, such as time and memory. Michalski and Winston (1986) suggested the Censored Production Rule (CPR) as an underlying representation and computational mechanism to enable logic-based systems to exhibit variable precision, in which certainty varies while specificity stays constant. As an extension of CPR, the Hierarchical Censored Production Rules (HCPRs) system of knowledge representation, proposed by Bharadwaj & Jain (1992), exhibits both variable certainty and variable specificity and offers mechanisms for handling the trade-off between the two. An HCPR has the form: Decision If(preconditions) Unless(censor) Generality(general_information) Specificity(specific_information). As an attempt towards evolving a generalized knowledge representation, an Extended Hierarchical Censored Production Rules (EHCPRs) system is suggested in this paper. With the inclusion of new operators, an Extended Hierarchical Censored Production Rule (EHCPR) takes the general form: Concept If(Preconditions) Unless(Exceptions) Generality(General-Concept) Specificity(Specific-Concepts) Has_part(default: structural-parts) Has_property(default: characteristic-properties) Has_instance(instances). It is shown how semantic networks and frames are represented in terms of EHCPRs. Multiple inheritance, inheritance with and without cancellation, recognition with partial match, and a few default logic problems are shown to be tackled efficiently in the proposed system.
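The variable-precision trade-off behind a CPR can be sketched in a few lines: skipping the Unless censor check trades certainty for speed, while checking it restores certainty at extra cost. The dictionary encoding and the bird/penguin rule below are a hypothetical illustration, not the HCPR/EHCPR system's actual syntax.

```python
def fire_rule(rule, facts, check_censors=True):
    """Evaluate a Censored Production Rule of the form
    Decision If(preconditions) Unless(censors).
    With check_censors=False, inference is faster but less certain,
    mimicking variable-precision reasoning under time pressure."""
    if not all(p in facts for p in rule["if"]):
        return None  # preconditions not satisfied
    if check_censors and any(c in facts for c in rule["unless"]):
        return None  # a censor blocks the decision
    return rule["decision"]

# Classic example: birds fly unless they are penguins.
rule = {"decision": "can_fly", "if": {"is_bird"}, "unless": {"is_penguin"}}

facts = {"is_bird", "is_penguin"}
print(fire_rule(rule, facts, check_censors=False))  # "can_fly": fast, wrong
print(fire_rule(rule, facts, check_censors=True))   # None: slower, certain
```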

Relevance: 100.00%

Abstract:

This paper presents a mathematical model and a methodology to solve the transmission network expansion planning problem with security constraints in a fully competitive market, assuming that all generation programming plans present in the system operation are known. The methodology lets us find an optimal transmission network expansion plan that allows the power system to operate adequately under each of the generation programming plans specified for the fully competitive market case, including a single-contingency situation with generation rescheduling using the (n-1) security criterion. In this context, centralized expansion planning with security constraints and expansion planning in a fully competitive market are subsets of the proposal presented in this paper. The model is solved using a genetic algorithm designed to efficiently handle reliable expansion planning in a fully competitive market. The results obtained for several well-known systems from the literature show the excellent performance of the proposed methodology.
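The genetic algorithm itself is not detailed in the abstract; as a hedged illustration of the general machinery only (tournament selection, one-point crossover, bit-flip mutation, elitism), here is a generic sketch applied to a toy bit-string objective standing in for a penalized expansion-plan cost. All parameters are illustrative.

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=30, generations=60, seed=7):
    """Generic GA maximizing `fitness` over bit strings, with tournament
    selection, one-point crossover, bit-flip mutation, and elitism
    (the best individual found so far is never lost)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = [best[:]]  # elitism: carry the incumbent forward
        while len(new_pop) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)  # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]  # one-point crossover
            if rng.random() < 0.1:
                child[rng.randrange(n_bits)] ^= 1  # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

# Toy objective standing in for a penalized expansion cost.
fitness = lambda bits: sum(bits)
best = genetic_algorithm(fitness, n_bits=16)
print(sum(best))
```

With elitism and a fixed seed, the best fitness is non-decreasing in the number of generations, which is the property worth checking in any GA sketch like this one.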