28 results for Time and space


Relevance: 100.00%

Abstract:

The boxicity (cubicity) of a graph G, denoted by box(G) (respectively cub(G)), is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (cubes) in ℝ^k. The problem of computing boxicity (cubicity) is known to be inapproximable in polynomial time even for graph classes like bipartite, co-bipartite and split graphs, within an O(n^(0.5−ε)) factor for any ε > 0, unless NP = ZPP. We prove that if a graph G on n vertices has a clique on n − k vertices, then box(G) can be computed in time n^2 2^(O(k^2 log k)). Using this fact, various FPT approximation algorithms for boxicity are derived. The parameter used is the vertex (or edge) edit distance of the input graph from certain graph families of bounded boxicity, like interval graphs and planar graphs. Using the same fact, we also derive an O(n √(log log n) / √(log n)) factor approximation algorithm for computing boxicity, which, to our knowledge, is the first o(n) factor approximation algorithm for the problem. We also present an FPT approximation algorithm for computing the cubicity of graphs, with vertex cover number as the parameter.
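
To make the definition concrete: a graph has boxicity at most k exactly when each vertex can be assigned a k-dimensional axis-parallel box so that two vertices are adjacent if and only if their boxes intersect. The sketch below (hypothetical helper names, not taken from the paper) builds the intersection graph of a given box assignment, which is a convenient way to check a candidate representation.

```python
from itertools import combinations

def intersection_graph(boxes):
    """Return the edge set of the intersection graph of axis-parallel boxes.

    `boxes` maps each vertex to a list of (lo, hi) intervals, one per
    dimension; two vertices are adjacent iff their intervals overlap in
    every dimension.
    """
    edges = set()
    for u, v in combinations(boxes, 2):
        if all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(boxes[u], boxes[v])):
            edges.add((u, v))
    return edges

# The 4-cycle a-b-c-d has boxicity 2: it has no interval (1-D)
# representation, but these 2-D boxes realise exactly its four edges.
boxes = {
    'a': [(0, 1), (0, 3)],
    'b': [(0, 3), (0, 1)],
    'c': [(2, 3), (0, 3)],
    'd': [(0, 3), (2, 3)],
}
print(sorted(intersection_graph(boxes)))  # [('a','b'), ('a','d'), ('b','c'), ('c','d')]
```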

Relevance: 100.00%

Abstract:

Dead-time is provided in between the gating signals of the top and bottom semiconductor switches in an inverter leg to prevent shorting of the DC bus. Due to this dead-time, there is a significant unwanted change in the output voltage of the inverter. The effect is different for different pulse width modulation (PWM) methodologies. The effect of dead-time on the output fundamental voltage is studied theoretically as well as experimentally for bus-clamping PWM methodologies. Further, experimental observations on the effectiveness of dead-time compensation are presented.
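
As a rough illustration of why the effect matters, a common first-order textbook model (not specific to this paper) estimates the average voltage error of an inverter leg over a switching period as t_d · f_sw · V_dc, with a polarity that opposes the load current. The sketch below uses that simplified model; the function name and the numbers are purely illustrative.

```python
def dead_time_voltage_error(v_dc, t_dead, f_sw, i_out):
    """Approximate average voltage error of one inverter leg per switching
    period due to dead-time (first-order model: magnitude t_dead*f_sw*V_dc,
    polarity opposing the load-current direction; device drops and the
    specific PWM method are ignored).
    """
    magnitude = t_dead * f_sw * v_dc
    return -magnitude if i_out > 0 else magnitude

# Example: 600 V DC bus, 2 us dead-time, 10 kHz switching frequency.
print(dead_time_voltage_error(v_dc=600.0, t_dead=2e-6, f_sw=10e3, i_out=5.0))
# -> -12.0 V average error while the leg current is positive
```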

Relevance: 100.00%

Abstract:

Phase-locked loops (PLLs) are necessary in grid-connected systems to obtain information about the frequency, amplitude and phase of the grid voltage. In stationary reference frame control, the unit vectors of PLLs are used for reference generation. It is important that the PLL performance is not affected significantly when the grid voltage undergoes amplitude and frequency variations. In this paper, a novel design is proposed for the popular single-phase PLL topology, namely the second-order generalized integrator (SOGI) based PLL, which achieves minimum settling time during grid voltage amplitude and frequency variations. The proposed design achieves a settling time of less than 27.7 ms. This design also ensures that the unit vectors generated by this PLL have a steady-state THD of less than 1% during frequency variations of the grid voltage. The design of the SOGI-PLL based on the theoretical analysis is validated by experimental results.
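
For readers unfamiliar with the structure, a SOGI quadrature signal generator produces an in-phase and a 90-degree-shifted copy of the input, which the PLL then uses for phase detection. The sketch below is a minimal forward-Euler simulation of the standard SOGI equations with a typical gain k ≈ √2; it is not the paper's minimum-settling-time design, and all names and step sizes are illustrative.

```python
import math

def sogi_step(v_in, state, omega, k=1.41, dt=1e-4):
    """One forward-Euler step of a standard SOGI quadrature signal generator.

    Continuous-time equations:
        dv_alpha/dt = k*omega*(v_in - v_alpha) - omega*v_beta
        dv_beta/dt  = omega*v_alpha
    v_alpha tracks the input; v_beta settles to its 90-degree-lagged copy.
    """
    v_alpha, v_beta = state
    dv_alpha = k * omega * (v_in - v_alpha) - omega * v_beta
    dv_beta = omega * v_alpha
    return (v_alpha + dt * dv_alpha, v_beta + dt * dv_beta)

# Feed a 50 Hz unit sine for 0.5 s; after the transient the pair
# (v_alpha, v_beta) approximates (sin(wt), -cos(wt)).
omega = 2 * math.pi * 50
state = (0.0, 0.0)
for n in range(5000):
    state = sogi_step(math.sin(omega * n * 1e-4), state, omega)
print(state)
```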

Relevance: 100.00%

Abstract:

Detailed pedofacies characterization, along with lithofacies investigations, of the Mio-Pleistocene Siwalik sediments exposed in the Ramnagar sub-basin has been carried out to elucidate the variability in time and space of fluvial processes and the role of intra- and extra-basinal controls on fluvial sedimentation during the evolution of the Himalayan foreland basin (HFB). The dominance of multiple, moderately to strongly developed palaeosol assemblages during deposition of the Lower Siwalik (~12-10.8 Ma) sediments suggests that the HFB was marked by the Upland set-up of Thomas et al. (2002). Activity of intra-basinal faults on the uplands and deposition of terminal fans at different times caused the development of multiple soils. Further, detailed pedofacies and lithofacies studies indicate the prevalence of stable tectonic conditions and the development of meandering streams with broad floodplains. However, the Middle Siwalik (~10.8-4.92 Ma) sub-group is marked by multistoried sandstones, minor mudstone and mainly weakly developed palaeosols, indicating deposition by large braided rivers in the form of megafans in the Lowland set-up of Thomas et al. (2002). A significant change in the nature and size of rivers from the Lower to the Middle Siwalik at ~10 Ma is found almost throughout the basin, from the Kohat Plateau (Pakistan) to Nepal, because the Himalayan orogeny witnessed its greatest tectonic upheaval at this time, leading to the attainment of great heights by the Himalaya, intensification of the monsoon, development of large river systems and a high rate of sedimentation, and thereby a major change from the Upland set-up to the Lowland set-up over major parts of the HFB. An interesting geomorphic environmental set-up prevailed in the Ramnagar sub-basin during deposition of the studied Upper Siwalik (~4.92 to <1.68 Ma) sediments, as observed from the degree of pedogenesis and the type of palaeosols. In general, the Upper Siwalik sub-group in the Ramnagar sub-basin is subdivided, from bottom to top, into the Purmandal sandstone (4.92-4.49 Ma), Nagrota (4.49-1.68 Ma) and Boulder Conglomerate (<1.68 Ma) formations on the basis of sedimentological characters and changes in dominant lithology. The presence of mudstone, a few thin gravel beds and a dominant sandstone lithology with weakly to moderately developed palaeosols in the Purmandal sandstone Fm. indicates deposition by shallow braided fluvial streams. The deposition of the mudstone-dominant Nagrota Fm., with moderately developed, and some well-developed, palaeosols and a zone of gleyed palaeosols with laminated mudstones and thin sandstones, took place in an environment marked by numerous small lakes, water-logged regions and small streams just south of the Piedmont zone, perhaps similar to what is happening presently in the Upland region/the Upper Gangetic plain; this area is locally called the 'Trai region' (Pascoe, 1964). Deposition of the Boulder Conglomerate Fm. took place by a gravelly braided river system close to the Himalayan Ranges. Activity along the Main Boundary Fault led to progradation of these environments distal-ward and to the development of an overall coarsening-upward sequence. (C) 2014 Elsevier B.V. All rights reserved.

Relevance: 100.00%

Abstract:

The voltage ripple and power loss in the DC capacitor of a voltage source inverter depend on the harmonic currents flowing through the capacitor. This paper presents a double Fourier series based analysis of the harmonic content of the DC capacitor current in a three-level neutral-point-clamped (NPC) inverter, modulated with sine-triangle pulse-width modulation (SPWM) or conventional space vector pulse-width modulation (CSVPWM) schemes. The analytical results are validated experimentally on a 3-kVA three-level inverter prototype. The capacitor current in an NPC inverter has a periodicity of 120° at the fundamental or modulation frequency. Hence, this current contains third-harmonic and triplen-frequency components, apart from switching-frequency components. The harmonic components vary with modulation index and power factor for both PWM schemes. The third-harmonic current decreases with increase in modulation index and also decreases with increase in power factor for both PWM methods. In general, the third-harmonic content is higher with SPWM than with CSVPWM at a given operating condition. Also, power loss and voltage ripple in the DC capacitor are estimated for both schemes using the current harmonic spectrum and the equivalent series resistance (ESR) of the capacitor.
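
The last step can be illustrated with a small calculation: once the RMS current harmonic spectrum of the capacitor is known, the ESR loss is the sum of I²·ESR over the harmonics, and each harmonic's ripple contribution follows from the capacitor impedance at that frequency. The sketch below assumes a frequency-independent ESR and uses purely illustrative numbers, not data from the paper.

```python
import math

def capacitor_loss_and_ripple(harmonics, capacitance, esr):
    """Estimate DC-capacitor power loss and per-harmonic voltage ripple
    from an RMS current harmonic spectrum.

    `harmonics` is a list of (frequency_hz, i_rms) pairs; `esr` is treated
    as frequency-independent for simplicity.
    """
    loss = sum(i_rms ** 2 * esr for _, i_rms in harmonics)
    ripple = {
        f: i_rms * math.hypot(esr, 1.0 / (2 * math.pi * f * capacitance))
        for f, i_rms in harmonics
    }
    return loss, ripple

# Hypothetical spectrum: a 150 Hz (third-harmonic) component plus a
# switching-frequency component, with a 2200 uF capacitor and 60 mOhm ESR.
print(capacitor_loss_and_ripple([(150.0, 4.0), (5000.0, 2.5)],
                                capacitance=2200e-6, esr=0.06))
```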

Relevance: 100.00%

Abstract:

Since streaming data keeps coming continuously as an ordered sequence, massive amounts of data are created. A big challenge in handling data streams is the limitation of time and space. Prototype selection on streaming data requires the prototypes to be updated in an incremental manner as new data comes in. We propose an incremental algorithm for prototype selection. This algorithm can also be used to handle very large datasets. Results are presented on a number of large datasets, and our method is compared to an existing algorithm for streaming data. Our algorithm saves time, and the prototypes selected give good classification accuracy.
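
The abstract does not spell out the update rule, so the following is only an illustrative condensed-nearest-neighbour style sketch of incremental prototype selection, not the authors' algorithm: a newly arrived point becomes a prototype only if the current prototype set would misclassify it, so memory grows with the difficulty of the stream rather than its length.

```python
def update_prototypes(prototypes, x, label):
    """Illustrative incremental prototype selection: keep an incoming
    point as a prototype only if the existing prototypes misclassify it
    under a 1-NN rule (squared Euclidean distance)."""
    if not prototypes:
        prototypes.append((x, label))
        return
    nearest = min(prototypes,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    if nearest[1] != label:
        prototypes.append((x, label))

protos = []
stream = [((0.0, 0.0), 'a'), ((0.1, 0.2), 'a'), ((5.0, 5.1), 'b'), ((5.2, 4.9), 'b')]
for point, lab in stream:
    update_prototypes(protos, point, lab)
print(protos)   # only two prototypes are retained, one per class
```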

Relevance: 100.00%

Abstract:

The distribution of black leaf nodes at each level of a linear quadtree is of significant interest in the context of estimating the time and space complexities of linear quadtree based algorithms. The maximum number of black nodes of a given level that can be fitted in a square grid of size 2^n × 2^n can readily be estimated from the ratio of areas. We show that the actual value of the maximum number of nodes of a level is much less than the maximum obtained from the ratio of the areas. This is due to the fact that the number of nodes possible at a level k, 0 ≤ k ≤ n − 1, should take into account the sum of the areas occupied by the nodes actually present at levels k + 1, k + 2, …, n − 1.
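
As a small worked example of why the area ratio overestimates: in a 4 × 4 image (n = 2), the ratio of areas allows four black blocks of side 2, but if all four quadrants are entirely black they merge into a single node covering the whole image, so at most three black leaves of side 2 can actually coexist; the bound tightens further once the area taken up by black leaves of other sizes is accounted for.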

Relevance: 100.00%

Abstract:

Let G = (V, E) be a weighted undirected graph having nonnegative edge weights. An estimate d̂(u, v) of the actual distance d(u, v) between u, v ∈ V is said to be of stretch t if and only if d(u, v) ≤ d̂(u, v) ≤ t · d(u, v). Computing all-pairs small-stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs t-stretch distances for a whole range of stretch t, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].
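
In other words, a stretch-t estimate may overestimate the true distance by at most a factor of t but never underestimate it. A trivial check (hypothetical names, just to make the definition concrete):

```python
def stretch(d_exact, d_est):
    """Stretch of a distance estimate: valid estimates never
    underestimate, so the stretch is the overestimation ratio."""
    assert d_est >= d_exact > 0
    return d_est / d_exact

# An estimate of 25 for a true distance of 10 has stretch 2.5: it is
# acceptable for a 3-stretch oracle but not for a 2-stretch one.
print(stretch(10, 25))
```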

Relevance: 100.00%

Abstract:

Applications in various domains often lead to very large and frequently high-dimensional data. Successful algorithms must avoid the curse of dimensionality but at the same time should be computationally efficient. Finding useful patterns in large datasets has attracted considerable interest recently. The primary goal of the paper is to implement an efficient hybrid tree based clustering method built on the CF-Tree and the KD-Tree, and to combine the clustering method with KNN classification. The implementation of the algorithm must address several concerns: good accuracy, low space usage and low running time. We evaluate the time and space efficiency, sensitivity to data input order, and clustering quality through several experiments.
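
The abstract does not describe the combined method in detail; purely as an illustration of why pairing clustering with KNN classification saves time and space, the sketch below classifies a query against cluster representatives (such as centroids produced by a CF-Tree or KD-Tree based clustering) rather than against every training point. All names and data are hypothetical.

```python
from collections import Counter

def knn_over_representatives(representatives, query, k=3):
    """Illustrative only: classify a query by k-NN over labelled cluster
    representatives instead of the full dataset, trading a little
    accuracy for large savings in time and space."""
    by_dist = sorted(representatives,
                     key=lambda rep: sum((a - b) ** 2
                                         for a, b in zip(rep[0], query)))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

reps = [((0.0, 0.0), 'a'), ((0.5, 0.4), 'a'), ((5.0, 5.0), 'b'), ((5.5, 4.8), 'b')]
print(knn_over_representatives(reps, (4.7, 5.2)))   # -> 'b'
```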

Relevance: 100.00%

Abstract:

Frequent episode discovery is a popular framework for mining data available as a long sequence of events. An episode is essentially a short ordered sequence of event types, and the frequency of an episode is some suitable measure of how often the episode occurs in the data sequence. Recently, we proposed a new frequency measure for episodes based on the notion of non-overlapped occurrences of episodes in the event sequence, and showed that such a definition, in addition to yielding computationally efficient algorithms, has some important theoretical properties in connecting frequent episode discovery with HMM learning. This paper presents some new algorithms for frequent episode discovery under this non-overlapped occurrences-based frequency definition. The algorithms presented here are better (by a factor of N, where N denotes the size of episodes being discovered) in terms of both time and space complexities when compared to existing methods for frequent episode discovery. We show through some simulation experiments that our algorithms are very efficient. The new algorithms presented here have arguably the least possible orders of space and time complexities for the task of frequent episode discovery.
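
For a single serial episode, the non-overlapped frequency can be computed with one left-to-right scan: advance a pointer through the episode as matching event types arrive and restart it whenever an occurrence completes. The sketch below illustrates only this counting idea for one episode; it is not the paper's multi-episode discovery algorithm, and the names are illustrative.

```python
def count_non_overlapped(event_sequence, episode):
    """Count non-overlapped occurrences of a serial episode in a single
    left-to-right scan: track how far into the episode we have matched,
    and restart from the beginning each time an occurrence completes.
    `event_sequence` is an iterable of event types; `episode` is the
    ordered tuple of event types defining the serial episode.
    """
    matched = 0
    count = 0
    for event in event_sequence:
        if event == episode[matched]:
            matched += 1
            if matched == len(episode):
                count += 1
                matched = 0
    return count

# Two non-overlapped occurrences of A -> B -> C in this sequence.
print(count_non_overlapped("ABXCABYC", ("A", "B", "C")))   # -> 2
```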

Relevance: 100.00%

Abstract:

In wireless sensor networks (WSNs), the communication traffic is often time and space correlated, where multiple nodes in a proximity start transmitting at the same time. Such a situation is known as spatially correlated contention. The random access methods to resolve such contention suffer from a high collision rate, whereas the traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, the situation of spatially correlated contention persists only for a short duration, and therefore generating an optimal or sub-optimal schedule is not very useful. On the other hand, if the algorithm takes a very long time to schedule, it will not only introduce additional delay in the data transfer but also consume more energy. To efficiently handle spatially correlated contention in WSNs, we present a distributed TDMA slot scheduling algorithm, called the DTSS algorithm. The DTSS algorithm is designed with the primary objective of reducing the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The algorithm uses randomized TDMA channel access as the mechanism to transmit protocol messages, which bounds the message delay and therefore reduces the time required to get a feasible schedule. The DTSS algorithm supports unicast, multicast and broadcast scheduling simultaneously, without any modification in the protocol. The protocol has been simulated using the Castalia simulator to evaluate its run-time performance. Simulation results show that our protocol is able to considerably reduce the time required for scheduling.
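
To illustrate the schedule-length bound (not the DTSS protocol itself, which is distributed and message-based), a centralised greedy assignment over the interference graph gives every node the smallest slot not used by its already-scheduled interferers, so the number of slots never exceeds the maximum interference degree plus one. The names and topology below are hypothetical.

```python
def greedy_slot_assignment(interference):
    """Centralised analogue of TDMA slot assignment on an interference
    graph: give every node the smallest slot unused by its already-
    scheduled interfering neighbours; the schedule length never exceeds
    max-degree + 1.  `interference` maps each node to the set of nodes
    it interferes with.
    """
    slots = {}
    for node in interference:
        taken = {slots[nbr] for nbr in interference[node] if nbr in slots}
        slot = 0
        while slot in taken:
            slot += 1
        slots[node] = slot
    return slots

# A 4-node line topology: maximum interference degree 2, so at most 3 slots.
topology = {'n1': {'n2'}, 'n2': {'n1', 'n3'}, 'n3': {'n2', 'n4'}, 'n4': {'n3'}}
print(greedy_slot_assignment(topology))   # e.g. {'n1': 0, 'n2': 1, 'n3': 0, 'n4': 1}
```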

Relevance: 100.00%

Abstract:

In WSNs, the communication traffic is often time and space correlated, where multiple nodes in a proximity start transmitting simultaneously. Such a situation is known as spatially correlated contention. The random access method to resolve such contention suffers from a high collision rate, whereas the traditional distributed TDMA scheduling techniques primarily try to improve the network capacity by reducing the schedule length. Usually, the situation of spatially correlated contention persists only for a short duration, and therefore generating an optimal or suboptimal schedule is not very useful. Additionally, if an algorithm takes a very long time to schedule, it will not only introduce additional delay in the data transfer but also consume more energy. In this paper, we present a distributed TDMA slot scheduling (DTSS) algorithm, which considerably reduces the time required to perform scheduling, while restricting the schedule length to the maximum degree of the interference graph. The DTSS algorithm supports unicast, multicast, and broadcast scheduling simultaneously, without any modification in the protocol. We have analyzed the protocol for average-case performance and also simulated it using the Castalia simulator to evaluate its runtime performance. Both analytical and simulation results show that our protocol is able to considerably reduce the time required for scheduling.

Relevance: 100.00%

Abstract:

Predation risk can strongly constrain how individuals use time and space. Grouping is known to reduce an individual's time investment in costly antipredator behaviours. Whether grouping might similarly provide a spatial release from antipredator behaviour and allow individuals to use risky habitat more and, thus, improve their access to resources is poorly known. We used mosquito larvae, Aedes aegypti, to test the hypothesis that grouping facilitates the use of high-risk habitat. We provided two habitats, one darker, low-risk and one lighter, high-risk, and measured the relative time spent in the latter by solitary larvae versus larvae in small groups. We tested larvae reared under different resource levels, and thus presumed to vary in body condition, because condition is known to influence risk taking. We also varied the degree of contrast in habitat structure. We predicted that individuals in groups should use high-risk habitat more than solitary individuals, allowing for influences of body condition and contrast in habitat structure. Grouping strongly influenced the time spent in the high-risk habitat, but, contrary to our expectation, individuals in groups spent less time in the high-risk habitat than solitary individuals. Furthermore, solitary individuals considerably increased the proportion of time spent in the high-risk habitat over time, whereas individuals in groups did not. Both solitary individuals and those in groups showed a small increase over time in their use of riskier locations within each habitat. The differences between solitary individuals and those in groups held across all resource and contrast conditions. Grouping may, thus, carry a poorly understood cost of constraining habitat use. This cost may arise because movement traits important for maintaining group cohesion (a result of strong selection on grouping) can act to exaggerate an individual preference for low-risk habitat. Further research is needed to examine the interplay between grouping, individual movement and habitat use traits in environments heterogeneous in risk and resources. (C) 2015 The Association for the Study of Animal Behaviour. Published by Elsevier Ltd. All rights reserved.