968 results for Riemann sum


Relevance:

10.00%

Publisher:

Abstract:

One of the results of projections made with global and regional climate models is the finding of a high probability of an increase in the frequency and intensity of extreme precipitation. Understanding the regularities of their recurrence and spatial extent is clearly of great economic and social importance. Therefore, independently of the introduction of new measurement techniques, archival data should be analysed and reinterpreted, taking advantage of the possibilities offered by the development of GIS. The main aim of this study is to analyse the regularities of the spatial and temporal variability of monthly and annual maximum daily precipitation totals (MSDO) for the years 1956-1980 over the area of Poland. The study employs geostatistical methods that are new to Polish geography. A DVD with the source database and the most important results in numerical and cartographic form is included with the publication.

Relevance:

10.00%

Publisher:

Abstract:

In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and as a consequence deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first one strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
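
The difference between the two policies can be made concrete with a small sketch; the data structure `path_bw`, the function name `best_wiring`, and the exhaustive search over candidate neighbor sets are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only (not the paper's algorithm): choosing k overlay
# neighbors under the Max-Min and Max-Sum objectives. path_bw[v][d] is an
# assumed estimate of the bottleneck bandwidth from this node to destination d
# when routing through candidate neighbor v.
from itertools import combinations
from typing import Dict, List


def best_wiring(path_bw: Dict[str, Dict[str, float]], k: int,
                objective: str) -> List[str]:
    candidates = list(path_bw)
    destinations = {d for row in path_bw.values() for d in row}

    def score(chosen) -> float:
        # Each destination is served by the best path through any chosen neighbor.
        per_dest = [max(path_bw[v].get(d, 0.0) for v in chosen)
                    for d in destinations]
        # Max-Min protects the slowest destination; Max-Sum maximizes total rate.
        return min(per_dest) if objective == "max-min" else sum(per_dest)

    return list(max(combinations(candidates, k), key=score))
```

Since n-way broadcast is confined to small or medium scale networks, even an exhaustive search over candidate sets of this kind stays tractable, which is the scale constraint the paper exploits.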

Relevance:

10.00%

Publisher:

Abstract:

For any q > 1, let MOD_q be a quantum gate that determines if the number of 1's in the input is divisible by q. We show that for any q,t > 1, MOD_q is equivalent to MOD_t (up to constant depth). Based on the case q=2, Moore has shown that quantum analogs of AC^(0), ACC[q], and ACC, denoted QAC^(0)_wf, QACC[2], QACC respectively, define the same class of operators, leaving q > 2 as an open question. Our result resolves this question, implying that QAC^(0)_wf = QACC[q] = QACC for all q. We also prove the first upper bounds for QACC in terms of related language classes. We define classes of languages EQACC, NQACC (both for arbitrary complex amplitudes) and BQACC (for rational number amplitudes) and show that they are all contained in TC^(0). To do this, we show that a TC^(0) circuit can keep track of the amplitudes of the state resulting from the application of a QACC operator using a constant width polynomial size tensor sum. In order to accomplish this, we also show that TC^(0) can perform iterated addition and multiplication in certain field extensions.
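
In symbols, and following the abstract's convention that the gate tests divisibility, the underlying Boolean predicate on an n-bit input x = x_1 ... x_n is

$$\mathrm{MOD}_q(x) = 1 \iff \sum_{i=1}^{n} x_i \equiv 0 \pmod{q},$$

with the quantum gate applying this predicate reversibly, for instance by flipping a target qubit.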

Relevance:

10.00%

Publisher:

Abstract:

The data streaming model provides an attractive framework for one-pass summarization of massive data sets at a single observation point. However, in an environment where multiple data streams arrive at a set of distributed observation points, sketches must be computed remotely and then must be aggregated through a hierarchy before queries may be conducted. As a result, many sketch-based methods for the single stream case do not apply directly, either because the error introduced becomes large or because the methods assume that the streams are non-overlapping. These limitations hinder the application of these techniques to practical problems in network traffic monitoring and aggregation in sensor networks. To address this, we develop a general framework for evaluating and enabling robust computation of duplicate-sensitive aggregate functions (e.g., SUM and QUANTILE), over data produced by distributed sources. We instantiate our approach by augmenting the Count-Min and Quantile-Digest sketches to apply in this distributed setting, and analyze their performance. We conclude with an experimental evaluation to validate our analysis.
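
As background for the Count-Min part, the following is a minimal single-stream Count-Min sketch with the cell-wise merge that makes sketches aggregable up a hierarchy; the duplicate-insensitive augmentation that is the paper's actual contribution is not shown, and the width, depth, and hashing choices are illustrative assumptions.

```python
# Minimal Count-Min sketch (illustrative; not the paper's augmented version).
import hashlib


class CountMin:
    def __init__(self, width: int = 272, depth: int = 5):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _col(self, row: int, key: str) -> int:
        # Deterministic hash so that sketches built on different nodes agree.
        digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8).digest()
        return int.from_bytes(digest, "big") % self.width

    def update(self, key: str, count: int = 1) -> None:
        for row in range(self.depth):
            self.table[row][self._col(row, key)] += count

    def query(self, key: str) -> int:
        # Never underestimates; the minimum over rows bounds the overcount.
        return min(self.table[row][self._col(row, key)]
                   for row in range(self.depth))

    def merge(self, other: "CountMin") -> None:
        # Sketches of disjoint streams combine by cell-wise addition, which is
        # what makes them attractive for aggregation through a hierarchy.
        for mine, theirs in zip(self.table, other.table):
            for i, v in enumerate(theirs):
                mine[i] += v
```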

Relevance:

10.00%

Publisher:

Abstract:

In a typical overlay network for routing or content sharing, each node must select a fixed number of immediate overlay neighbors for routing traffic or content queries. A selfish node entering such a network would select neighbors so as to minimize the weighted sum of expected access costs to all its destinations. Previous work on selfish neighbor selection has built intuition with simple models where edges are undirected, access costs are modeled by hop-counts, and nodes have potentially unbounded degrees. However, in practice, important constraints not captured by these models lead to richer games with substantively and fundamentally different outcomes. Our work models neighbor selection as a game involving directed links, constraints on the number of allowed neighbors, and costs reflecting both network latency and node preference. We express a node's "best response" wiring strategy as a k-median problem on asymmetric distance, and use this formulation to obtain pure Nash equilibria. We experimentally examine the properties of such stable wirings on synthetic topologies, as well as on real topologies and maps constructed from PlanetLab and AS-level Internet measurements. Our results indicate that selfish nodes can reap substantial performance benefits when connecting to overlay networks composed of non-selfish nodes. On the other hand, in overlays that are dominated by selfish nodes, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naive wiring strategies.
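
Schematically, and with notation that is ours rather than necessarily the paper's, node i with degree bound k picks a neighbor set S, a subset of the other nodes with |S| = k, so as to minimize

$$\sum_{j \neq i} w_j \, \min_{s \in S} \left( d_{is} + d_{sj} \right),$$

where w_j is the preference weight of destination j, d_{is} the latency of the candidate directed link to s, and d_{sj} the current overlay distance from s to j; this is a k-median problem over the asymmetric distances d_{is} + d_{sj}.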

Relevance:

10.00%

Publisher:

Abstract:

A model of pitch perception, called the Spatial Pitch Network or SPINET model, is developed and analyzed. The model neurally instantiates ideas from the spectral pitch modeling literature and joins them to basic neural network signal processing designs to simulate a broader range of perceptual pitch data than previous spectral models. The components of the model are interpreted as peripheral mechanical and neural processing stages, which are capable of being incorporated into a larger network architecture for separating multiple sound sources in the environment. The core of the new model transforms a spectral representation of an acoustic source into a spatial distribution of pitch strengths. The SPINET model uses a weighted "harmonic sieve" whereby the strength of activation of a given pitch depends upon a weighted sum of narrow regions around the harmonics of the nominal pitch value, and higher harmonics contribute less to a pitch than lower ones. Suitably chosen harmonic weighting functions enable computer simulations of pitch perception data involving mistuned components, shifted harmonics, and various types of continuous spectra including rippled noise. It is shown how the weighting functions produce the dominance region, how they lead to octave shifts of pitch in response to ambiguous stimuli, and how they lead to a pitch region in response to the octave-spaced Shepard tone complexes and Deutsch tritones without the use of attentional mechanisms to limit pitch choices. An on-center off-surround network in the model helps to produce noise suppression, partial masking and edge pitch. Finally, it is shown how peripheral filtering and short term energy measurements produce a model pitch estimate that is sensitive to certain component phase relationships.
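
A minimal way to write the harmonic-sieve idea (the exact weighting functions in the model differ) is

$$S(p) = \sum_{k=1}^{K} w_k \, E_k(p),$$

where S(p) is the strength of candidate pitch p, E_k(p) is the spectral activation falling in a narrow region around the k-th harmonic k·p, and the weights w_k decrease with k so that higher harmonics contribute less than lower ones.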

Relevance:

10.00%

Publisher:

Abstract:

Choosing the right or the best option is often a demanding and challenging task for the user (e.g., a customer in an online retailer) when there are many available alternatives. In fact, the user rarely knows which offering will provide the highest value. To reduce the complexity of the choice process, automated recommender systems generate personalized recommendations. These recommendations take into account the preferences collected from the user in an explicit (e.g., letting users express their opinion about items) or implicit (e.g., studying some behavioral features) way. Such systems are widespread; research indicates that they increase the customers' satisfaction and lead to higher sales. Preference handling is one of the core issues in the design of every recommender system. This kind of system often aims at guiding users in a personalized way to interesting or useful options in a large space of possible options. Therefore, it is important for them to capture and model the user's preferences as accurately as possible. In this thesis, we develop a comparative preference-based user model to represent the user's preferences in conversational recommender systems. This type of user model allows the recommender system to capture several preference nuances from the user's feedback. We show that, when applied to conversational recommender systems, the comparative preference-based model is able to guide the user towards the best option while the system is interacting with her. We empirically test and validate the suitability and the practical computational aspects of the comparative preference-based user model and the related preference relations by comparing them to a sum-of-weights-based user model and the related preference relations. Product configuration, scheduling a meeting, and the construction of autonomous agents are among several artificial intelligence tasks that involve a process of constrained optimization, that is, optimization of behavior or options subject to given constraints with regard to a set of preferences. When solving a constrained optimization problem, pruning techniques, such as the branch and bound technique, aim to direct the search towards the best assignments, thus allowing the bounding functions to prune more branches in the search tree. Several constrained optimization problems may exhibit dominance relations. These dominance relations can be particularly useful in constrained optimization problems as they can give rise to new ways (rules) of pruning non-optimal solutions. Such pruning methods can achieve dramatic reductions in the search space while looking for optimal solutions. A number of constrained optimization problems can model the user's preferences using comparative preferences. In this thesis, we develop a set of pruning rules used in the branch and bound technique to efficiently solve this kind of optimization problem. More specifically, we show how to generate newly defined pruning rules from a dominance algorithm that refers to a set of comparative preferences. These rules include pruning approaches (and combinations of them) which can drastically prune the search space. They mainly reduce the number of (expensive) pairwise comparisons performed during the search while guiding constrained optimization algorithms to find optimal solutions. Our experimental results show that the pruning rules that we have developed and their different combinations have varying impact on the performance of the branch and bound technique.
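
The role of dominance-based pruning inside branch and bound can be sketched generically; everything named below (`expand`, `is_complete`, `dominates`) is a hypothetical placeholder, not the thesis's actual rules, and `dominates` would have to encode the comparative-preference dominance test, including a sound way to compare a stored solution against a partial assignment (e.g., via an optimistic bound).

```python
# Generic branch-and-bound skeleton with a dominance-based pruning hook
# (illustrative sketch only, under the assumptions stated above).
from typing import Callable, Iterable, List, TypeVar

Node = TypeVar("Node")


def branch_and_bound(root: Node,
                     expand: Callable[[Node], Iterable[Node]],
                     is_complete: Callable[[Node], bool],
                     dominates: Callable[[Node, Node], bool]) -> List[Node]:
    """Return the set of non-dominated complete assignments found."""
    frontier, best = [root], []
    while frontier:
        node = frontier.pop()
        # Pruning rule: skip any branch already dominated by a stored solution,
        # saving the expensive pairwise comparisons deeper in the subtree.
        if any(dominates(sol, node) for sol in best):
            continue
        if is_complete(node):
            best = [s for s in best if not dominates(node, s)]
            best.append(node)
        else:
            frontier.extend(expand(node))
    return best
```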

Relevance:

10.00%

Publisher:

Abstract:

The issue, with international and national overtones, of direct relevance to the present study, relates to the shaping of beginning teachers’ identities in the workplace. As the shift from an initial teacher education programme into initial practice in schools is a period of identity change worthy of investigation, this study focuses on the transformative search by nine beginning primary teachers for their teaching identities, throughout the course of their initial year of occupational experience, post-graduation. The nine beginning teacher participants work in a variety of primary school settings, thus strengthening the representativeness of the research cohort. Privileging ‘insider’ perspectives, the research goal is to understand the complexities of lived experience from the viewpoints of the participating informants. The shaping of identity is conceived of in dimensional terms. Accordingly, a framework composed of three dimensions of beginning teacher experience is devised, namely: contextual; emotional; temporo-spatial. Data collection and analysis is informed by principles derived from sociocultural theories; activity theory; figured worlds theory; and, dialogical self theory. Individual, face-to-face semi-structured interviews, and the maintenance of solicited digital diaries, are the principal methods of data collection employed. The use of a dimensional model fragments the integrated learning experiences of beginning teachers into constituent parts for the purpose of analysis. While acknowledging that the actual journey articulated by each participant is a more complex whole than the sum of its parts, key empirically-based claims are presented as per the dimensional framework employed: contextuality; emotionality; temporo-spatiality. As a result of applying the foci of an international literature to an under-researched aspect of Irish education, this study is offered as a context-specific contribution to the knowledge base on beginning teaching. As the developmental needs of beginning teachers constitute an emerging area of intense policy focus in Ireland, this research undertaking is both relevant and timely.

Relevance:

10.00%

Publisher:

Abstract:

The present study aimed to investigate interactions of components in the high solids systems during storage. The systems included (i) lactose–maltodextrin (MD) with various dextrose equivalents at different mixing ratios, (ii) whey protein isolate (WPI)–oil [olive oil (OO) or sunflower oil (SO)] at 75:25 ratio, and (iii) WPI–oil–{glucose (G)–fructose (F) 1:1 syrup [70% (w/w) total solids]} at a component ratio of 45:15:40. Crystallization of lactose was delayed and increasingly inhibited with increasing MD contents and higher DE values (small molecular size or low molecular weight), although all systems showed similar glass transition temperatures at each a_w. The water sorption isotherms of non-crystalline lactose and lactose–MD (0.11 to 0.76 a_w) could be derived from the sum of sorbed water contents of individual amorphous components. The GAB equation was fitted to data of all non-crystalline systems. The protein–oil and protein–oil–sugar materials showed maximum protein oxidation and disulfide bonding at 2 weeks of storage at 20 and 40°C. The WPI–OO showed denaturation and preaggregation of proteins during storage at both temperatures. The presence of G–F in WPI–oil increased T_onset and T_peak of protein aggregation, and oxidative damage of the protein during storage, especially in systems with a higher level of unsaturated fatty acids. Lipid oxidation and glycation products in the systems containing sugar promoted oxidation of proteins, increased changes in protein conformation and aggregation of proteins, and resulted in insolubility of solids or increased hydrophobicity concomitantly with hardening of structure, covalent crosslinking of proteins, and formation of stable polymerized solids, especially after storage at 40°C. We found protein hydration transitions preceding denaturation transitions in all high protein systems and also the glass transition of confined water in protein systems using dynamic mechanical analysis.
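
For reference, the GAB isotherm in its usual three-parameter form (symbols as commonly used, not necessarily those of this study) relates the sorbed water content m to the water activity a_w through

$$\frac{m}{m_0} = \frac{C\,K\,a_w}{(1 - K a_w)\,(1 - K a_w + C K a_w)},$$

where m_0 is the monolayer water content and C and K are fitted constants.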

Relevance:

10.00%

Publisher:

Abstract:

The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of the traditional approach in fisheries management. Quantitative models are used for obtaining estimates of population abundance, and the management advice is based on annual harvest levels (TAC), where only a certain amount of catch is allowed from specific fish stocks. However, these models are data-intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and hence can be affected by disturbances that may account for both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as the Cumulative Sum (CUSUM) control charts, are useful in classifying these effects and hence can be used to trigger a management response only when a significant impact on the stock biomass occurs. This thesis explores how empirical indicators along with CUSUM can be used for monitoring, assessment and management of fish stocks. I begin my thesis by exploring various age-based catch indicators, to identify those which are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators towards changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected to the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control charts used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (‘control mean’) to detect out-of-control (significant impact) situations. The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how much shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporating it into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which method can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubb's harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and LFI, because this may account for impacts independent of fishing. The procedure of integrating both SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of the fish stock until more observations become available for quantitative assessments.
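
A minimal two-sided, decision-interval CUSUM of the kind referred to above can be sketched as follows; the reference value, allowance, and decision interval are illustrative parameters, and the self-starting variant and the indicator preprocessing used in the thesis are not reproduced.

```python
# Two-sided decision-interval CUSUM (illustrative sketch only).
from typing import List, Sequence


def cusum_alarms(values: Sequence[float], target: float,
                 k: float, h: float) -> List[int]:
    """Return indices at which the chart signals an out-of-control shift.

    target: in-control reference level of the monitored indicator
    k:      allowance, typically half of the shift size to be detected
    h:      decision interval (signal threshold)
    """
    s_hi = s_lo = 0.0
    alarms: List[int] = []
    for i, x in enumerate(values):
        s_hi = max(0.0, s_hi + (x - target - k))   # accumulates upward drift
        s_lo = max(0.0, s_lo + (target - x - k))   # accumulates downward drift
        if s_hi > h or s_lo > h:
            alarms.append(i)
            s_hi = s_lo = 0.0                      # restart after a signal
    return alarms
```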

Relevance:

10.00%

Publisher:

Abstract:

Irish monitoring data on PCDD/Fs, DL-PCBs and Marker PCBs were collated and combined with Irish Adult Food Consumption Data to estimate the dietary background exposure of Irish adults to dioxins and PCBs. Furthermore, all available information on the 2008 Irish pork dioxin food contamination incident was collated and analysed with a view to evaluating any potential impact the incident may have had on the general dioxin and PCB background exposure levels estimated for the adult population in Ireland. The average upper-bound daily intake of dioxins by Irish adults, expressed as Total WHO TEQ (2005) (PCDD/Fs & DL-PCBs), from environmental background contamination was estimated at 0.3 pg/kg bw/d, and at 1 pg/kg bw/d at the 95th percentile. The average upper-bound daily intake by Irish adults of the sum of the 6 Marker PCBs from background contamination ubiquitous in the environment was estimated at 1.6 ng/kg bw/d, and at 6.8 ng/kg bw/d at the 95th percentile. Dietary background exposure estimates for both dioxins and PCBs indicate that the Irish adult population has exposures below the European average, a finding which is also supported by the levels detected in the breast milk of Irish mothers. Exposure levels are below health-based guidance values and/or body burdens associated with the TWI (for dioxins) or with a NOAEL (for PCBs). Given the current toxicological knowledge, based on biomarker data and estimated dietary exposure, the general background exposure of the Irish adult population to dioxins and PCBs is of no human health concern. In 2008, a porcine fat sample taken as part of the national residues monitoring programme led to the detection of a major feed contamination incident in the Republic of Ireland. The source of the contamination was traced back to the use of contaminated oil in a direct-drying feed operation system. Congener profiles in animal fat and feed samples showed a high level of consistency and pinpointed the likely source of fuel contamination to be a highly chlorinated commercial PCB mixture. To estimate the additional exposure to dioxins and PCBs due to the contamination of pig and cattle herds, a collection and systematic review of all data associated with the contamination incident was conducted. A model was devised that took into account the proportion of contaminated product reaching the final consumer during the 90-day contamination incident window. For the 90-day period, the total additional exposure to Total TEQ (PCDD/F & DL-PCB) WHO (2005) amounted to 407 pg/kg bw/90d at the 95th percentile and 1911 pg/kg bw/90d at the 99th percentile. Exposure estimates derived for both dioxins and PCBs showed that the body burden of the general population remained largely unaffected by the contamination incident, and approximately 10% of the adult population in Ireland was exposed to elevated levels of dioxins and PCBs. Whilst people in this 10% cohort experienced quite a significant additional load on top of the existing body burden, the estimated exposure values do not indicate that body burdens approached levels associated with adverse health effects, based on current knowledge. The exposure period was also limited in time to approximately 3 months, following the FSAI recall of contaminated meat immediately on detection of the contamination. A follow-up breast milk study of Irish first-time mothers conducted in 2009/2010 did not show any increase in concentrations compared to the study conducted in 2002. The latter supports the conclusion that the majority of the Irish adult population was not affected by the contamination incident.

Relevance:

10.00%

Publisher:

Abstract:

A digital differentiator computes the derivative of an input signal. This work presents first-degree and second-degree differentiators, which are designed as both infinite-impulse-response (IIR) filters and finite-impulse-response (FIR) filters. The proposed differentiators have low-pass magnitude response characteristics, thereby rejecting noise frequencies higher than the cut-off frequency. Both steady-state frequency-domain characteristics and time-domain analyses are given for the proposed differentiators. It is shown that the proposed differentiators perform well when compared to previously proposed filters. When considering the time-domain characteristics of the differentiators, the processing of quantized signals proved especially enlightening in terms of the filtering effects of the proposed differentiators. The coefficients of the proposed differentiators are obtained using an optimization algorithm, with optimization objectives that include the magnitude and phase response. The low-pass characteristic of the proposed differentiators is achieved by minimizing the filter variance. The low-pass differentiators designed show steep roll-off as well as highly accurate magnitude response in the pass-band. While having a history of over three hundred years, the design of fractional differentiators has become a ‘hot topic’ in recent decades. One challenging problem in this area is that there are many different definitions to describe the fractional model, such as the Riemann-Liouville and Caputo definitions. Through the use of a feedback structure based on the Riemann-Liouville definition, it is shown that the performance of the fractional differentiator can be improved in both the frequency domain and the time domain. Two applications based on the proposed differentiators are described in the thesis. The first involves the application of second-degree differentiators in the estimation of the frequency components of a power system. The second concerns an image-processing edge-detection application.
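
For reference, the Riemann-Liouville fractional derivative of order α, with n − 1 < α < n and n an integer, is usually written as

$$ {}_{a}D_{t}^{\alpha} f(t) = \frac{1}{\Gamma(n-\alpha)} \, \frac{d^{n}}{dt^{n}} \int_{a}^{t} \frac{f(\tau)}{(t-\tau)^{\alpha-n+1}} \, d\tau, $$

which is the definition on which the feedback-structure design mentioned above is based.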

Relevance:

10.00%

Publisher:

Abstract:

Lovelock terms are polynomial scalar densities in the Riemann curvature tensor that have the remarkable property that their Euler-Lagrange derivatives contain derivatives of the metric of an order not higher than 2 (while generic polynomial scalar densities lead to Euler-Lagrange derivatives with derivatives of the metric of order 4). A characteristic feature of Lovelock terms is that their first nonvanishing term in the expansion $g_{\lambda\mu} = \eta_{\lambda\mu} + h_{\lambda\mu}$ of the metric around flat space is a total derivative. In this paper, we investigate generalized Lovelock terms defined as polynomial scalar densities in the Riemann curvature tensor and its covariant derivatives (of arbitrarily high but finite order) such that their first nonvanishing term in the expansion of the metric around flat space is a total derivative. This is done by reformulating the problem as a BRST cohomological one and by using cohomological tools. We determine all the generalized Lovelock terms. We find, in fact, that the class of nontrivial generalized Lovelock terms contains only the usual ones. Allowing covariant derivatives of the Riemann tensor does not lead to a new structure. Our work provides a novel algebraic understanding of the Lovelock terms in the context of BRST cohomology. © 2005 IOP Publishing Ltd.
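
The lowest nontrivial example beyond the Einstein-Hilbert term is the Gauss-Bonnet density,

$$\mathcal{L}_2 = \sqrt{-g}\left(R^2 - 4\,R_{\mu\nu}R^{\mu\nu} + R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}\right),$$

whose Euler-Lagrange derivatives are of second order in the metric and whose first nonvanishing term in the expansion around flat space is a total derivative.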

Relevance:

10.00%

Publisher:

Abstract:

Summer Sprite for Orchestra was completed in December 2004. The piece originated from a singular encounter with little angels at Chang-Kyung Palace, which is the oldest and the most beautiful palace in Korea, and where the kings of the Chosun Dynasty (1393-1897) lived. This encounter was in the summer of 2002. I certainly could not prove that those angels I met were real. Possibly they were the reflection of drops of water after a sudden shower on that summer day. However, I definitely remember that short, unforgettable, and mysterious moment and the angels' beautiful dance-like celebration. Summer Sprite is based on these special memories and the encounter with the little angels that summer. Summer Sprite consists of 3 movements: "Greeting," "Encounter," and "Celebration." These follow the course of my encounter with the little angels. In Summer Sprite, I wished to describe the image of the angels as well as the progression of greeting, encounter, and celebration with them. The moods that follow in Summer Sprite are by turns lyrical, poetic, fantastic, mysterious, and dream-like. In each movement, I describe the meeting of angels and composer through the use of the soloists -- violin (sometimes viola) and cello. As suggested by the subtitle of the first movement, "Greeting" portrays the moment when I, surprised, met the angels. It begins with tam-tam, marimba, harp, and piano and sets a mysterious and dark mood. The second movement, "Encounter," is shorter than the first movement. This movement provides a more tranquil mood as well as more unique timbres resulting from the use of mutes and special instruments (English horn, harp, crotales, suspended cymbal, and celesta). The delicate expression of the percussion is particularly important in establishing the static mood of this movement. The last movement, "Celebration," is bright and energetic. It is also the longest. Here, I require the most delicate changes of dynamics and tempo, the most vigorous harmonies, and the fastest rhythmic figures, as well as the most independent, lyrical, and poetic melodies. For bright orchestral tone color, I used various kinds of percussion such as timpani, xylophone, marimba, vibraphone, cymbals, side drum, tambourine, triangle, and bass drum. This last movement is divided rondo-like into five sections: the first (mm. 1-3), second (mm. 4 to rehearsal number 1), third (rehearsal numbers 2-4), fourth (rehearsal numbers 5-7), and fifth (rehearsal numbers 8-18). To sum up, Summer Sprite describes an unforgettable and mysterious moment in my life. My intention was to portray this through a concerto-like framework. A model for this would be Brahms' "Double Concerto" in A minor, op. 102, in which the solo cello stands for my angel and the solo violin (sometimes solo viola) for me.

Relevance:

10.00%

Publisher:

Abstract:

We exploit the distributional information contained in high-frequency intraday data in constructing a simple conditional moment estimator for stochastic volatility diffusions. The estimator is based on the analytical solutions of the first two conditional moments for the latent integrated volatility, the realization of which is effectively approximated by the sum of the squared high-frequency increments of the process. Our simulation evidence indicates that the resulting GMM estimator is highly reliable and accurate. Our empirical implementation based on high-frequency five-minute foreign exchange returns suggests the presence of multiple latent stochastic volatility factors and possible jumps. © 2002 Elsevier Science B.V. All rights reserved.
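
Concretely, with intraday returns r_{t,j}, j = 1, ..., 1/Δ, sampled at interval Δ (here five minutes), the latent integrated volatility over day t is approximated by the realized variance

$$ \int_{t-1}^{t} \sigma^{2}(s)\, ds \;\approx\; \sum_{j=1}^{1/\Delta} r_{t,j}^{2}, $$

which converges to the integrated volatility (plus the contribution of any jumps) as Δ → 0.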