960 results for Dunkl operators


Relevance:

10.00%

Publisher:

Abstract:

X. Wang, J. Yang, X. Teng, W. Xia, and R. Jensen. Feature Selection based on Rough Sets and Particle Swarm Optimization. Pattern Recognition Letters, vol. 28, no. 4, pp. 459-471, 2007.

Relevance:

10.00%

Publisher:

Abstract:

Infantolino, B.; Gales, D.; Winter, S.; Challis, J., (2007) 'The validity of ultrasound estimation of muscle volumes', Journal of Applied Biomechanics (ISSN 1065-8483) 23(3) pp.213-217 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Iantchenko, A.; Sjöstrand, J.; Zworski, M., (2002) 'Birkhoff normal forms in semi-classical inverse problems', Mathematical Research Letters 9(3) pp.337-362 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Wood, Ian; Geissert, M.; Heck, H.; Hieber, M., (2005) 'The Ornstein-Uhlenbeck semigroup in exterior domains', Archiv der Mathematik 86 pp.554-562 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Gough, John, (2004) 'Holevo-Ordering and the Continuous-Time Limit for Open Floquet Dynamics', Letters in Mathematical Physics 67(3) pp.207-221 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

Gohm, Rolf, (2003) 'A probabilistic index for completely positive maps and an application', Journal of Operator Theory 54(2) pp.339-361 RAE2008

Relevance:

10.00%

Publisher:

Abstract:

The role of renewable energy in power systems is becoming more significant due to the increasing cost of fossil fuels and climate change concerns. However, the inclusion of Renewable Energy Generators (REG), such as wind power, has created additional problems for power system operators because of the variability and lower predictability of output of most REGs, with the Economic Dispatch (ED) problem being particularly difficult to resolve. In previous papers we reported on the inclusion of wind power in the ED calculations. The simulation was performed using a system model with wind power as an intermittent source, and the results were compared with those of the Direct Search Method (DSM) for similar cases. In this paper we report on our continuing investigations into using Genetic Algorithms (GA) for ED for an independent power system with a significant amount of wind energy in its generator portfolio. The results demonstrate, in line with previous reports in the literature, the effectiveness of GA when measured against a benchmark technique such as DSM.
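To make the approach concrete, here is a minimal sketch of a genetic algorithm for economic dispatch in which forecast wind is treated as a must-take source that reduces the residual load served by the thermal units. The cost coefficients, unit limits, wind forecast and GA parameters are illustrative assumptions; this is not the authors' system model or their DSM benchmark.

```python
# Minimal GA sketch for economic dispatch with wind as a must-take source.
# All numbers below are illustrative assumptions.
import random

COST = [(0.008, 7.0, 200.0), (0.009, 6.3, 180.0), (0.007, 6.8, 140.0)]  # (a, b, c) per thermal unit
P_MIN, P_MAX = [100.0, 50.0, 80.0], [500.0, 300.0, 250.0]               # unit limits (MW)
DEMAND, WIND = 850.0, 120.0          # total load and forecast wind output (MW)
RESIDUAL = DEMAND - WIND             # load left for the thermal units

def cost(p):
    return sum(a * x * x + b * x + c for (a, b, c), x in zip(COST, p))

def repair(p):
    # Clip to unit limits, then nudge the schedule toward the residual demand.
    p = [min(max(x, lo), hi) for x, lo, hi in zip(p, P_MIN, P_MAX)]
    gap = RESIDUAL - sum(p)
    return [min(max(x + gap / len(p), lo), hi) for x, lo, hi in zip(p, P_MIN, P_MAX)]

def fitness(p):
    # Fuel cost plus a penalty for any remaining power-balance violation.
    return cost(p) + 1000.0 * abs(sum(p) - RESIDUAL)

def ga(pop_size=40, generations=200):
    pop = [repair([random.uniform(lo, hi) for lo, hi in zip(P_MIN, P_MAX)])
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 + random.gauss(0, 5) for x, y in zip(a, b)]  # crossover + mutation
            children.append(repair(child))
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
print("dispatch (MW):", [round(x, 1) for x in best], "cost:", round(cost(best), 1))
```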

Relevance:

10.00%

Publisher:

Abstract:

Wydział Matematyki i Informatyki (Faculty of Mathematics and Computer Science)

Relevance:

10.00%

Publisher:

Abstract:

Postgraduate project/dissertation presented to Universidade Fernando Pessoa as part of the requirements for the degree of Master in Dental Medicine

Relevance:

10.00%

Publisher:

Abstract:

We present a type system, StaXML, which employs the stacked type syntax to represent essential aspects of the potential roles of XML fragments in the structure of complete XML documents. The simplest application of this system is to enforce well-formedness upon the construction of XML documents without requiring the use of templates or balanced "gap plugging" operators; this allows it to be applied to programs written according to common imperative web scripting idioms, particularly the echoing of unbalanced XML fragments to an output buffer. The system can be extended to verify particular XML applications such as XHTML and to identify individual XML tags constructed from their lexical components. We also present StaXML for PHP, a prototype precompiler for the PHP4 scripting language which infers StaXML types for expressions without assistance from the programmer.
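The following is a minimal runtime sketch, in Python rather than PHP, of the intuition behind typing echoed fragments with a stack: each fragment either pushes open tags or must close the tag currently on top. It is an illustration only, not the StaXML static type system or its PHP4 precompiler; the class name and tag pattern are assumptions.

```python
# Illustrative runtime check of the stack discipline behind well-formed XML output.
import re

TAG = re.compile(r"<(/?)([A-Za-z][\w-]*)[^>]*?(/?)>")

class XmlBuffer:
    def __init__(self):
        self.parts, self.stack = [], []

    def echo(self, fragment):
        # Track open tags across possibly unbalanced fragments.
        for close, name, selfclose in TAG.findall(fragment):
            if selfclose:
                continue
            if close:
                if not self.stack or self.stack[-1] != name:
                    raise ValueError(f"unbalanced close tag </{name}>")
                self.stack.pop()
            else:
                self.stack.append(name)
        self.parts.append(fragment)

    def render(self):
        if self.stack:
            raise ValueError(f"unclosed tags: {self.stack}")
        return "".join(self.parts)

# Typical imperative web-scripting idiom: unbalanced fragments echoed in sequence.
buf = XmlBuffer()
buf.echo("<html><body>")          # opens two tags
buf.echo("<p>hello</p>")          # balanced fragment
buf.echo("</body></html>")        # closes what is still open
print(buf.render())
```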

Relevance:

10.00%

Publisher:

Abstract:

Anomalies are unusual and significant changes in a network's traffic levels, which can often involve multiple links. Diagnosing anomalies is critical for both network operators and end users. It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data. In this paper we propose a general method to diagnose anomalies. This method is based on a separation of the high-dimensional space occupied by a set of network traffic measurements into disjoint subspaces corresponding to normal and anomalous network conditions. We show that this separation can be performed effectively using Principal Component Analysis. Using only simple traffic measurements from links, we study volume anomalies and show that the method can: (1) accurately detect when a volume anomaly is occurring; (2) correctly identify the underlying origin-destination (OD) flow which is the source of the anomaly; and (3) accurately estimate the amount of traffic involved in the anomalous OD flow. We evaluate the method's ability to diagnose (i.e., detect, identify, and quantify) both existing and synthetically injected volume anomalies in real traffic from two backbone networks. Our method consistently diagnoses the largest volume anomalies, and does so with a very low false alarm rate.
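A minimal sketch of the subspace idea follows, assuming synthetic link measurements, a fixed normal-subspace dimension and a simple mean-plus-k-sigma threshold in place of the paper's Q-statistic: traffic is projected onto the top principal components, and timesteps with large residual energy in the complementary subspace are flagged.

```python
# Sketch of subspace-based volume-anomaly detection on synthetic link traffic.
import numpy as np

rng = np.random.default_rng(0)
T, L, K = 500, 20, 4                     # timesteps, links, normal-subspace dimension

# Synthetic link measurements: a few latent OD flows routed over many links ...
routing = rng.random((L, K))
X = rng.random((T, K)) @ routing.T + 0.05 * rng.standard_normal((T, L))
X[300, [2, 7, 11]] += 3.0                # ... plus one injected volume spike on a few links

Xc = X - X.mean(axis=0)                  # centre each link's timeseries
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
P = Vt[:K].T                             # top-K principal directions (normal subspace)

residual = Xc - Xc @ P @ P.T             # projection onto the anomalous subspace
spe = (residual ** 2).sum(axis=1)        # squared prediction error per timestep

threshold = spe.mean() + 5 * spe.std()   # simple threshold; the paper uses a Q-statistic
print("flagged timesteps:", np.where(spe > threshold)[0])
```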

Relevance:

10.00%

Publisher:

Abstract:

Detecting and understanding anomalies in IP networks is an open and ill-defined problem. Toward this end, we have recently proposed the subspace method for anomaly diagnosis. In this paper we present the first large-scale exploration of the power of the subspace method when applied to flow traffic. An important aspect of this approach is that it fuses information from flow measurements taken throughout a network. We apply the subspace method to three different types of sampled flow traffic in a large academic network: multivariate timeseries of byte counts, packet counts, and IP-flow counts. We show that each traffic type brings into focus a different set of anomalies via the subspace method. We illustrate and classify the set of anomalies detected. We find that almost all of the anomalies detected represent events of interest to network operators. Furthermore, the anomalies span a remarkably wide spectrum of event types, including denial of service attacks (single-source and distributed), flash crowds, port scanning, downstream traffic engineering, high-rate flows, worm propagation, and network outage.
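As a small illustration of the data preparation this implies, the sketch below aggregates sampled flow records into the three multivariate timeseries mentioned above (byte, packet and IP-flow counts per origin-destination pair per time bin); each matrix would then be fed to the subspace detector. The record fields and the five-minute bin size are illustrative assumptions.

```python
# Aggregate sampled flow records into byte, packet and flow-count timeseries.
from collections import defaultdict

BIN = 300  # seconds (5-minute bins, an assumed choice)

def aggregate(flow_records):
    """flow_records: iterable of dicts with keys ts, src, dst, bytes, packets."""
    bytes_ts = defaultdict(float)
    packets_ts = defaultdict(float)
    flows_ts = defaultdict(float)
    for r in flow_records:
        key = (r["ts"] // BIN, (r["src"], r["dst"]))   # (time bin, OD pair)
        bytes_ts[key] += r["bytes"]
        packets_ts[key] += r["packets"]
        flows_ts[key] += 1                              # each record counts one flow
    return bytes_ts, packets_ts, flows_ts

records = [
    {"ts": 0,   "src": "A", "dst": "B", "bytes": 1200, "packets": 10},
    {"ts": 30,  "src": "A", "dst": "B", "bytes": 800,  "packets": 6},
    {"ts": 310, "src": "C", "dst": "B", "bytes": 64,   "packets": 1},
]
for name, series in zip(("bytes", "packets", "flows"), aggregate(records)):
    print(name, dict(series))
```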

Relevance:

10.00%

Publisher:

Abstract:

For any q > 1, let MOD_q be a quantum gate that determines if the number of 1's in the input is divisible by q. We show that for any q,t > 1, MOD_q is equivalent to MOD_t (up to constant depth). Based on the case q=2, Moore has shown that quantum analogs of AC^(0), ACC[q], and ACC, denoted QAC^(0)_wf, QACC[2], QACC respectively, define the same class of operators, leaving q > 2 as an open question. Our result resolves this question, implying that QAC^(0)_wf = QACC[q] = QACC for all q. We also prove the first upper bounds for QACC in terms of related language classes. We define classes of languages EQACC, NQACC (both for arbitrary complex amplitudes) and BQACC (for rational number amplitudes) and show that they are all contained in TC^(0). To do this, we show that a TC^(0) circuit can keep track of the amplitudes of the state resulting from the application of a QACC operator using a constant width polynomial size tensor sum. In order to accomplish this, we also show that TC^(0) can perform iterated addition and multiplication in certain field extensions.
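As a concrete illustration, the sketch below builds the permutation unitary of a MOD_q gate on n input qubits plus one target qubit, flipping the target exactly when the Hamming weight of the inputs is divisible by q; this reversible realisation follows the usual definition and is an assumption here, not code from the paper.

```python
# Build the permutation unitary of a MOD_q gate (n input qubits + 1 target qubit).
import numpy as np

def mod_q_gate(n, q):
    dim = 2 ** (n + 1)                      # basis: inputs in the high bits, target in the low bit
    U = np.zeros((dim, dim))
    for basis in range(dim):
        inputs, target = basis >> 1, basis & 1
        if bin(inputs).count("1") % q == 0:
            target ^= 1                     # flip target when the weight is divisible by q
        U[(inputs << 1) | target, basis] = 1.0
    return U

U = mod_q_gate(n=3, q=2)
assert np.allclose(U @ U.T, np.eye(U.shape[0]))   # permutation matrices are unitary
# |110>|0>: two 1s, divisible by 2, so the target flips to |1>.
state = np.zeros(16); state[0b1100] = 1.0
print(np.argmax(U @ state) == 0b1101)
```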

Relevance:

10.00%

Publisher:

Abstract:

The last 30 years have seen Fuzzy Logic (FL) emerge as a method either complementing or challenging stochastic methods as the traditional way of modelling uncertainty. But the circumstances under which FL or stochastic methods should be used remain contested, because the areas of application of statistical and FL methods overlap and opinions differ as to when each method should be used. What is lacking are practically relevant case studies comparing the two methods. This work compares stochastic and FL methods for the assessment of spare capacity using the example of pharmaceutical high purity water (HPW) utility systems. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems.

The results provide evidence that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in the typical case where extreme events, for example peaks in demand, or day-to-day variation rather than average values are of interest. The average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that the stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events or the modelling of day-to-day variation are important in capacity extension projects.

Other reasons supporting the suggestion that stochastic HPW models are preferable to FL HPW models include:

1. The computer code for stochastic models is typically less complex than for FL models, reducing code maintenance and validation effort.

2. In many respects FL models are similar to deterministic models, so the need for a FL model over a deterministic model is questionable in the case of industrial-scale HPW systems as presented here (as well as other similar systems), since the latter requires simpler models.

3. A FL model may be difficult to "sell" to an end user, as its results represent "approximate reasoning", a definition of which is, however, lacking.

4. Stochastic models may be applied with relatively minor modifications to other systems, whereas FL models may not. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not. This is because the FL and stochastic model philosophies of a HPW system are fundamentally different: the stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on either estimated or historical data, while the FL model simulates schedule uncertainties based on estimated operator behaviour, e.g. tiredness of the operators and their working schedule. In a municipal drinking water distribution system the notion of "operator" breaks down.

5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed-volume uncertainty, as there appears to be no reasonable way to account for it with FL, whereas the stochastic model includes volume uncertainty.
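To illustrate the kind of stochastic spare-capacity check argued for above, here is a minimal Monte Carlo sketch in which dispensing events are drawn from assumed distributions and the simulated hourly demand is compared against plant capacity; every number in it (dispensing points, volumes, capacity) is an illustrative assumption, not data from the thesis.

```python
# Monte Carlo sketch of an HPW spare-capacity check under random dispensing demand.
import random

N_POINTS = 12          # dispensing points on the HPW loop (assumed)
CAPACITY = 900.0       # litres per hour the plant can sustain (assumed)
TRIALS = 10_000

def hourly_demand():
    # Each point dispenses with some probability; volumes vary around a mean.
    total = 0.0
    for _ in range(N_POINTS):
        if random.random() < 0.4:                         # chance the point is active this hour
            total += max(0.0, random.gauss(120.0, 40.0))  # litres drawn, with variation
    return total

draws = [hourly_demand() for _ in range(TRIALS)]
shortfall = sum(d > CAPACITY for d in draws) / TRIALS
print(f"mean demand {sum(draws)/TRIALS:.0f} L/h, "
      f"P(demand > capacity) ~ {shortfall:.1%}")
# A deterministic run with average values (0.4 * 12 * 120 = 576 L/h) would report
# ample spare capacity, while the simulated peaks occasionally exceed 900 L/h.
```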

Relevance:

10.00%

Publisher:

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis of the rising and falling edges of the signal and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format can increase the throughput per individual channel, which helps to overcome the challenges mentioned above and realise 400Gb/s to 1Tb/s transceivers.
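As a small illustration of why PAM-4 raises throughput per lane, the sketch below maps bit pairs to four amplitude levels and slices noisy levels back to bits, so two bits travel per symbol at an unchanged symbol rate. The Gray mapping and normalised levels are common conventions assumed here, not details of the transceiver described in the work.

```python
# PAM-4 encode/decode sketch: two bits per symbol over four amplitude levels.
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}   # normalised levels (assumed)
INV_MAP = {level: bits for bits, level in GRAY_MAP.items()}

def pam4_encode(bits):
    assert len(bits) % 2 == 0
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(levels):
    out = []
    for lvl in levels:
        # Slice the received level to the nearest ideal amplitude.
        nearest = min(INV_MAP, key=lambda ideal: abs(ideal - lvl))
        out.extend(INV_MAP[nearest])
    return out

bits = [1, 0, 0, 1, 1, 1, 0, 0]
symbols = pam4_encode(bits)                 # 8 bits -> 4 symbols
noisy = [s + 0.3 for s in symbols]          # a little (assumed) channel offset
print(symbols, pam4_decode(noisy) == bits)  # 2 bits per symbol survive the slicing
```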