977 results for Empirical Mode Decomposition


Relevance:

30.00%

Publisher:

Abstract:

More evenly spread demand for public transport throughout the day can reduce a transit service provider's total asset and labour costs. A plausible peak-spreading strategy is to increase the peak fare and/or reduce the off-peak fare. This paper reviews relevant empirical studies for urban rail systems, as rail transit plays a key role in Australian urban passenger transport and experiences severe peak loading variability. The literature is categorised into four groups: a) passenger opinions on willingness to change travel times, b) valuations of displacement time using the stated preference technique, c) simulations of peak spreading based on trip scheduling models, and d) real-world cases of peak spreading using differential fares. Policy prescription is advised to take into account the impacts of travellers' time flexibility and the joint effects of mode shifting and peak spreading. Although it focuses on urban rail, the arguments in this paper are relevant to public transport in general and are of value to researchers and practitioners.

Relevance:

30.00%

Publisher:

Abstract:

Vehicle speed is an important attribute for analysing the utility of a transport mode, and the speed relationship between multiple modes of transport is of interest to traffic planners and operators. This paper quantifies the relationship between bus speed and average car speed by integrating Bluetooth data and Transit Signal Priority data from the urban network in Brisbane, Australia. The method proposed in this paper is the first of its kind to relate bus speed and average car speed by integrating multi-source traffic data in a corridor-based approach. Three transferable regression models, relating not-in-service bus speed, in-service bus speed during peak periods, and in-service bus speed during off-peak periods to average car speed, are proposed. The models are cross-validated and the interrelationships are significant.
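The abstract does not reproduce the fitted models; the sketch below is a minimal illustration, on synthetic data, of how one such speed-relationship model could be fitted and cross-validated. The linear form, the variable names and all numbers are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: fit and cross-validate a linear model relating in-service
# bus speed (as would come from Transit Signal Priority logs) to average
# car speed (as would come from Bluetooth detectors). Data are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
bus_speed = rng.uniform(15, 45, size=200)             # km/h, hypothetical samples
car_speed = 1.3 * bus_speed + rng.normal(0, 3, 200)   # assumed linear relation + noise

X = bus_speed.reshape(-1, 1)
model = LinearRegression()
scores = cross_val_score(model, X, car_speed, cv=5, scoring="r2")
model.fit(X, car_speed)
print(f"slope={model.coef_[0]:.2f}, intercept={model.intercept_:.2f}")
print(f"5-fold CV R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```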

Relevance:

30.00%

Publisher:

Abstract:

This study examines, both theoretically and empirically, how well the theories of Norman Holland, David Bleich, Wolfgang Iser and Stanley Fish can explain readers' interpretations of literary texts. The theoretical analysis concentrates on their views on language from the point of view of Wittgenstein's Philosophical Investigations and shows that many of the assumptions related to language in these theories are problematic. The empirical data show that readers often form very similar interpretations; the study thus challenges the common assumption that literary interpretations tend to be idiosyncratic.

The empirical data consist of freely worded written answers to questions on three short stories, given by 27 Finnish university students. Some of the questions addressed issues that were discussed in large parts of the texts; some referred to issues that were mentioned only in passing or implied. The short stories were "The Witch à la Mode" by D. H. Lawrence, "Rain in the Heart" by Peter Taylor and "The Hitchhiking Game" by Milan Kundera.

According to Fish, readers create both the formal features of a text and their interpretation of it according to an interpretive strategy, and people who agree form an interpretive community. However, a typical answer usually contains ideas repeated by several readers as well as observations not mentioned by anyone else, so it is very difficult to determine which readers belong to the same interpretive community. Moreover, readers with opposing opinions often seem to pay attention to the same textual features and even acknowledge the possibility of an opposing interpretation; they therefore do not seem to create the formal features of the text in different ways.

Iser suggests that an interpretation emerges from the interaction between the text and the reader when the reader determines the implications of the text and in this way fills the "gaps" in the text. Iser believes that the text guides the reader, but as he also believes that meaning lies on a level beyond words, he cannot explain how the text directs the reader. The similarity in the interpretations, and the fact that agreement is strongest on issues that are discussed broadly in the text, do however support his assumption that readers are guided by the text.

In Bleich's view, all interpretations have personal motives and each person has an idiosyncratic language system; the situation in which a person learns a word determines the most important meaning it has for that person. In order to uncover the personal etymologies of words, Bleich asks his readers to associate freely on the basis of a text and note down all the personal memories and feelings that the reading experience evokes. Bleich's theory of the idiosyncratic language system seems to rely on a misconceived notion of the role that ostensive definitions play in language use. The readers' responses show that spontaneous associations to personal life do colour readers' interpretations, but such instances are rather rare.

According to Holland, an interpretation reflects the reader's identity theme. Language use is regulated by shared rules, but everyone follows the rules in his or her own way, and words mean different things to different people. The problem with this view is that if there is any basis for language use, it is precisely the shared way of following linguistic rules.

Wittgenstein suggests that our understanding of words is related to shared ways of using words and to our understanding of human behaviour. This view seems to give better grounds for understanding similarity and difference in literary interpretations than the theories of Holland, Bleich, Fish and Iser.

Relevance:

30.00%

Publisher:

Abstract:

Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. The psychological theories of design, also called the design cognition literature, describe the design process from an information-processing viewpoint. These models co-exist with normative standards of how designs should be crafted, and in many places there are concrete discrepancies between the two, in a way that resembles the difference between actual and ideal decision-making. This study aimed to explore one such possible difference, related to problem decomposition. Decomposition is a standard component of models of human problem solving and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been preliminarily investigated. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there are both implicit and explicit forms of decomposition, but have not provided a theoretical basis for the two. The current research therefore began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search in conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata of conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition. The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, judging from their use of structured strategies, the designers relied throughout on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In future work, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition; the current results could then be reinterpreted.

Relevance:

30.00%

Publisher:

Abstract:

Matrix decompositions, in which a given matrix is represented as a product of two other matrices, are regularly used in data mining. Most matrix decompositions have their roots in linear algebra, but the needs of data mining are not always those of linear algebra. In data mining one needs results that are interpretable, and what is considered interpretable in data mining can be very different from what is considered interpretable in linear algebra.

The purpose of this thesis is to study matrix decompositions that directly address the issue of interpretability. An example is a decomposition of binary matrices in which the factor matrices are required to be binary and the matrix multiplication is Boolean. The restriction to binary factor matrices increases interpretability, since the factor matrices are of the same type as the original matrix, and allows the use of Boolean matrix multiplication, which is often more intuitive than normal matrix multiplication with binary matrices. Several other decomposition methods are also described, and the computational complexity of computing them is studied together with the hardness of approximating the related optimization problems. Based on these studies, algorithms for constructing the decompositions are proposed. Constructing the decompositions turns out to be computationally hard, and the proposed algorithms are mostly based on various heuristics. Nevertheless, the algorithms are shown to be capable of finding good results in empirical experiments conducted with both synthetic and real-world data.
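As a concrete illustration of the Boolean product this thesis builds on (our sketch, not the thesis's code): under Boolean multiplication an entry of the product is 1 exactly when some inner index matches in both factors, i.e. 1+1 = 1. The factor matrices and target below are made up.

```python
# Minimal sketch of the Boolean matrix product used in Boolean matrix
# decomposition: (B ∘ C)[i, j] = 1 iff B[i, k] = C[k, j] = 1 for some k.
import numpy as np

def boolean_product(B, C):
    # Sum-of-products > 0 is equivalent to OR-of-ANDs for 0/1 matrices.
    return (B.astype(int) @ C.astype(int) > 0).astype(int)

# Binary factor matrices (illustrative).
B = np.array([[1, 0],
              [1, 1],
              [0, 1]])
C = np.array([[1, 1, 0, 0],
              [0, 1, 1, 1]])
A = boolean_product(B, C)          # the binary matrix the factors represent

# Decomposition algorithms of this kind seek factors minimising the number
# of entries where the Boolean reconstruction disagrees with the data.
noisy = A.copy()
noisy[0, 3] ^= 1                   # flip one bit to simulate noise
error = np.sum(noisy != boolean_product(B, C))
print(A, "reconstruction error vs noisy data:", error)
```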

Relevance:

30.00%

Publisher:

Abstract:

This paper provides an empirical estimation of energy efficiency and other proximate factors that explain energy intensity in Australia over the period 1978-2009. The analysis is performed by decomposing the changes in energy intensity into energy efficiency, fuel mix and structural changes, using sectoral and sub-sectoral data. The results show that the driving forces behind the decrease in energy intensity in Australia are the efficiency effect and the sectoral composition effect, with the former found to be more prominent than the latter. Moreover, the favourable impact of the composition effect has slowed consistently in recent years. A perfect positive association characterizes the relationship between energy intensity and carbon intensity in Australia. The decomposition results indicate that Australia needs to improve energy efficiency further to reduce energy intensity and carbon emissions.
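The abstract does not name the index formula used; a standard choice for this kind of analysis is logarithmic mean Divisia index (LMDI) decomposition. The sketch below applies multiplicative LMDI to made-up two-sector data; the method choice, sector names and numbers are assumptions for illustration, not taken from the paper.

```python
# Hedged LMDI sketch: split the change in aggregate energy intensity into
# an efficiency (sectoral intensity) effect and a structural (composition)
# effect. Two synthetic sectors, two years; all numbers are invented.
import math

def logmean(a, b):
    return (a - b) / (math.log(a) - math.log(b)) if a != b else a

E0 = {"industry": 500.0, "services": 200.0}   # energy use, year 0 (PJ)
E1 = {"industry": 480.0, "services": 260.0}   # energy use, year 1 (PJ)
Q0 = {"industry": 100.0, "services": 150.0}   # sectoral output, year 0
Q1 = {"industry": 110.0, "services": 190.0}   # sectoral output, year 1

Y0, Y1 = sum(Q0.values()), sum(Q1.values())
I0, I1 = sum(E0.values()) / Y0, sum(E1.values()) / Y1   # aggregate intensity

L = logmean(I1, I0)
eff = struct = 0.0
for s in E0:
    w = logmean(E1[s] / Y1, E0[s] / Y0) / L                  # LMDI weight
    eff += w * math.log((E1[s] / Q1[s]) / (E0[s] / Q0[s]))   # intensity effect
    struct += w * math.log((Q1[s] / Y1) / (Q0[s] / Y0))      # structure effect

# Multiplicative LMDI is exact: the effects compose to the total ratio.
print(f"I1/I0 = {I1 / I0:.4f}")
print(f"efficiency x structure = {math.exp(eff) * math.exp(struct):.4f}")
```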

Relevance:

30.00%

Publisher:

Abstract:

The microstructural evolution on aging a Co-3 wt pct Ti-2 wt pct Nb alloy has been followed by transmission electron microscopy and diffraction to show that the solid solution decomposed by the spinodal mode. The strengthening observed has been correlated with the differences in lattice parameters of the coexisting phases. The several stages of coarsening have been documented to yield information about their kinetics and morphological changes.

Relevance:

30.00%

Publisher:

Abstract:

We introduce a conceptual model for the in-plane physics of an earthquake fault. The model employs cellular automaton techniques to simulate tectonic loading, earthquake rupture, and strain redistribution. The impact of a hypothetical crustal elastodynamic Green's function is approximated by a long-range strain redistribution law with an r^(-p) dependence. We investigate the influence of the effective elastodynamic interaction range upon the dynamical behaviour of the model by conducting experiments with different values of the exponent p. The results indicate that this model has two distinct, stable modes of behaviour. The first mode produces a characteristic earthquake distribution, with moderate to large events preceded by an interval of time in which the rate of energy release accelerates. A correlation function analysis reveals that accelerating sequences are associated with a systematic, global evolution of strain energy correlations within the system. The second stable mode produces Gutenberg-Richter statistics, with near-linear energy release and no significant global correlation evolution. A model with effectively short-range interactions preferentially displays Gutenberg-Richter behaviour. However, models with long-range interactions appear to switch between the characteristic and GR modes. As the range of elastodynamic interactions is increased, characteristic behaviour begins to dominate GR behaviour. These models demonstrate that evolution of strain energy correlations may occur within systems with a fixed elastodynamic interaction range. Supposing that similar mode-switching dynamical behaviour occurs within earthquake faults, intermediate-term forecasting of large earthquakes may be feasible for some earthquakes but not for others, in alignment with certain empirical seismological observations. Further numerical investigation of dynamical models of this type may lead to advances in earthquake forecasting research and theoretical seismology.
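The abstract specifies only the r^(-p) form of the redistribution law; the fragment below is our illustrative sketch (not the authors' code) of the core step such a cellular automaton needs: when a cell fails, its strain is shared among all other cells with weights falling off as r^(-p), so that smaller p gives an effectively longer interaction range.

```python
# Illustrative sketch of long-range strain redistribution on a 2-D grid.
import numpy as np

def redistribute(strain, i, j, p):
    """Share the strain of failed cell (i, j) with an r**(-p) kernel."""
    n, m = strain.shape
    ii, jj = np.indices((n, m))
    r = np.hypot(ii - i, jj - j)
    with np.errstate(divide="ignore"):
        w = np.where(r > 0, r**-p, 0.0)   # no self-interaction
    w /= w.sum()                          # conserve the redistributed strain
    dropped = strain[i, j]
    strain = strain + dropped * w
    strain[i, j] = 0.0                    # the failed cell relaxes fully
    return strain

rng = np.random.default_rng(1)
s = rng.uniform(0.0, 1.0, (32, 32))
total_before = s.sum()
s = redistribute(s, 16, 16, p=2.0)        # p sets the interaction range
print(total_before, "->", s.sum())        # total strain is conserved
```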

Relevance:

30.00%

Publisher:

Abstract:

We use the coherent-mode representation of partially coherent fields to analyze correlated imaging with classical light sources. This formalism is very useful for studying imaging quality. When the unknown object is decomposed as a superposition of coherent modes, the components corresponding to small eigenvalues cannot be well imaged: the generated images depend crucially on the distribution of the eigenvalues of the coherent-mode representation of the source and on the decomposition coefficients of the object. Three kinds of correlated imaging schemes are analyzed numerically.
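Numerically, the coherent-mode representation amounts to an eigendecomposition of the (Hermitian) mutual coherence matrix of the source. The sketch below is our illustration of that idea on a made-up Gaussian Schell-model-like source, not the paper's calculation: object components carried by modes with small eigenvalues contribute little to the image.

```python
# Hedged sketch: coherent-mode decomposition of a partially coherent source
# and projection of a hypothetical object onto the modes.
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)
# Illustrative mutual coherence matrix J(x1, x2): Gaussian intensity profile
# times a narrow Gaussian degree of coherence (Schell-model form).
J = (np.exp(-(x[:, None]**2 + x[None, :]**2))
     * np.exp(-((x[:, None] - x[None, :])**2) / 0.05))

vals, modes = np.linalg.eigh(J)             # coherent modes of the source
vals, modes = vals[::-1], modes[:, ::-1]    # sort by decreasing eigenvalue

obj = np.exp(-((x - 0.2)**2) / 0.01)        # hypothetical object profile
coeffs = modes.T @ obj                      # decomposition coefficients

# Fraction of the object's energy carried by "strong" modes (eigenvalue
# above 1% of the largest); the rest is effectively lost in the image.
strong = vals > 0.01 * vals[0]
captured = np.sum(coeffs[strong]**2) / np.sum(coeffs**2)
print(f"{strong.sum()} significant modes carry {captured:.1%} of the object")
```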

Relevance:

30.00%

Publisher:

Abstract:

The catalytic decomposition of NO was studied over an Fe/NaZSM-5 catalyst. Novel results were observed with the microwave heating mode. The conversion of NO to N2 increased remarkably with increasing Fe loading. The effects of a series of reaction parameters, including reaction temperature, O2 concentration, NO concentration, gas flow rate and H2O addition, on the productivity of N2 have been investigated. The catalyst is shown to exhibit good endurance to excess O2 in the microwave heating mode. Under all reaction conditions, NO converted predominantly to N2, and the highest conversion of NO to N2 was up to 70%.

Relevance:

30.00%

Publisher:

Abstract:

Countries across the world are being challenged to decarbonise their energy systems in response to diminishing fossil fuel reserves, rising GHG emissions and the dangerous threat of climate change. There has been a renewed interest in energy efficiency, renewable energy and low-carbon energy as policy-makers seek to identify and put in place the most robust sustainable energy system that can address this challenge. This thesis seeks to improve the evidence base underpinning energy policy decisions in Ireland, with a particular focus on natural gas, which by 2011 had grown to a 30% share of Ireland's total primary energy requirement (TPER). Natural gas is used in all sectors of the Irish economy and is seen by many as a transition fuel to a low-carbon energy system; it is also a uniquely excellent source of data for many aspects of energy consumption. A detailed decomposition analysis of natural gas consumption in the residential sector quantifies many of the structural drivers of change, with activity (R² = 0.97) and intensity (R² = 0.69) being the best explainers of changing gas demand. The 2002 residential building regulations are subjected to an ex-post evaluation which, using empirical data, finds a 44 ± 9.5% shortfall in expected energy savings as well as a 13 ± 1.6% level of non-compliance. A detailed energy demand model of the entire Irish energy system is presented together with scenario analysis of a large number of energy efficiency policies, which show an aggregate reduction in total final consumption (TFC) of 8.9% compared to a reference scenario. The role of natural gas as a transition fuel over a long time horizon (2005-2050) is analysed using an energy systems model and a decomposition analysis, which shows the contribution of fuel switching to natural gas to be worth 12 percentage points of an overall 80% reduction in CO2 emissions. Finally, an analysis of the potential for CCS in Ireland finds gas CCS to be more robust than coal CCS to changes in fuel prices, capital costs and emissions reduction targets; the cost-optimal location for a gas CCS plant in Ireland is found to be in Cork, with sequestration in the depleted gas field at Kinsale.

Relevance:

30.00%

Publisher:

Abstract:

We present here evidence for the observation of magnetohydrodynamic (MHD) sausage modes in magnetic pores in the solar photosphere, along with further evidence for the omnipresent nature of acoustic global modes. The empirical mode decomposition method of wave analysis is used to identify the oscillations detected through a 4170 Å "blue continuum" filter observed with the Rapid Oscillations in the Solar Atmosphere (ROSA) instrument. Out-of-phase periodic behavior in pore size and intensity is used as an indicator of the presence of magnetoacoustic sausage oscillations. Multiple signatures of the magnetoacoustic sausage mode are found in a number of pores, with periods ranging from as short as 30 s up to 450 s. A number of the magnetoacoustic sausage mode oscillations have periods of 3 and 5 minutes, similar to the acoustic global modes of the solar interior. It is proposed that these global oscillations could be the driver of the sausage-type magnetoacoustic MHD wave modes in pores.
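As a minimal illustration of empirical mode decomposition itself (using the open-source PyEMD package rather than the authors' pipeline, and a synthetic stand-in for the ROSA time series), one can sift a signal into intrinsic mode functions and inspect their periods for the 3- and 5-minute bands:

```python
# Hedged EMD sketch: decompose a synthetic "pore intensity" series into
# intrinsic mode functions (IMFs) with PyEMD (pip install EMD-signal).
import numpy as np
from PyEMD import EMD

t = np.arange(0.0, 3600.0, 2.1)                   # ~2.1 s cadence, 1 h of data
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * t / 300.0)           # 5-minute oscillation
          + 0.5 * np.sin(2 * np.pi * t / 180.0)   # 3-minute oscillation
          + 0.3 * rng.normal(size=t.size))        # noise

imfs = EMD()(signal)                              # one row per IMF (last row: residue)
for k, imf in enumerate(imfs):
    zc = np.sum(np.diff(np.sign(imf)) != 0)       # zero crossings
    if zc > 1:
        # Two zero crossings per cycle: mean period ~ 2 * duration / zc.
        print(f"IMF {k}: mean period ~ {2 * (t[-1] - t[0]) / zc:.0f} s")
```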

Relevance:

30.00%

Publisher:

Abstract:

In this study, we investigate an adaptive decomposition and ordering strategy that automatically divides examinations into difficult and easy sets for constructing an examination timetable. The examinations in the difficult set are considered to be hard to place and hence are listed before the ones in the easy set in the construction process. Moreover, the examinations within each set are ordered using different strategies based on graph colouring heuristics. Initially, the examinations are placed into the easy set. During the construction process, examinations that cannot be scheduled are identified as the ones causing infeasibility and are moved forward in the difficult set to ensure earlier assignment in subsequent attempts. On the other hand, the examinations that can be scheduled remain in the easy set.

Within the easy set, a new subset called the boundary set is introduced to accommodate shuffling strategies to change the given ordering of examinations. The proposed approach, which incorporates different ordering and shuffling strategies, is explored on the Carter benchmark problems. The empirical results show that the performance of our algorithm is broadly comparable to existing constructive approaches.
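The abstract describes the adaptive loop in prose; the following is a schematic paraphrase (our sketch, not the authors' implementation) of how unplaceable examinations migrate into the difficult set so that they are scheduled earlier on subsequent construction attempts. The helper `try_place` stands for any slot-assignment routine and is assumed, not specified by the paper.

```python
# Schematic sketch of adaptive decomposition for examination timetabling:
# exams that cause infeasibility move forward into the "difficult" set.
def construct_timetable(exams, order_key, try_place, max_attempts=50):
    difficult = []                           # hard-to-place exams, scheduled first
    easy = sorted(exams, key=order_key)      # e.g. a graph-colouring heuristic order
    for _ in range(max_attempts):
        schedule, failed = {}, []
        for exam in difficult + easy:
            slot = try_place(exam, schedule) # returns a feasible slot or None
            if slot is None:
                failed.append(exam)          # this exam caused infeasibility
            else:
                schedule[exam] = slot
        if not failed:
            return schedule                  # complete, feasible timetable
        # Promote the offending exams so they are assigned earlier next time.
        difficult = failed + [e for e in difficult if e not in failed]
        easy = [e for e in easy if e not in failed]
    return None                              # no feasible ordering found
```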

Relevance:

30.00%

Publisher:

Abstract:

We describe the use of bivariate 3-D empirical orthogonal functions (EOFs) in characterising low-frequency variability of the Atlantic thermohaline circulation (THC) in the Hadley Centre global climate model HadCM3. We find that the leading two modes are well correlated with an index of the meridional overturning circulation (MOC) on decadal timescales, with the leading mode alone accounting for 54% of the decadal variance. Episodes of coherent oscillations in the sub-space of the leading EOFs are identified; these episodes are of great interest for the predictability of the THC, and could indicate the existence of different regimes of natural variability. The mechanism identified for the multi-decadal variability is an internal ocean mode, dominated by changes in convection in the Nordic Seas, which lead the changes in the MOC by a few years. Variations in salinity transports from the Arctic and from the North Atlantic are the main feedbacks controlling the oscillation. This mode has a weak feedback onto the atmosphere and hence a surface climatic influence. Interestingly, some of these climate impacts lead the changes in the overturning. There are also similarities to observed multi-decadal climate variability.
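EOF analysis itself is standard; the sketch below (on random placeholder fields, not HadCM3 output) shows the usual construction: standardise and stack the two fields along the spatial dimension to make the analysis bivariate, then take the EOFs from an SVD of the anomaly matrix.

```python
# Minimal bivariate EOF sketch: EOFs are right singular vectors of the
# (time x space) anomaly matrix; two fields are stacked to couple them.
import numpy as np

rng = np.random.default_rng(0)
ntime, nspace = 120, 500                     # e.g. 120 annual-mean samples
temp = rng.normal(size=(ntime, nspace))      # placeholder temperature anomalies
salt = rng.normal(size=(ntime, nspace))      # placeholder salinity anomalies

# Standardise each field, then stack along space for a bivariate analysis.
X = np.hstack([(f - f.mean(0)) / f.std(0) for f in (temp, salt)])
U, s, Vt = np.linalg.svd(X, full_matrices=False)

explained = s**2 / np.sum(s**2)              # variance fraction per mode
pcs = U * s                                  # principal-component time series
print(f"leading EOF explains {explained[0]:.1%} of the variance")
# pcs[:, 0] could then be correlated with an MOC index, as in the paper.
```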

Relevance:

30.00%

Publisher:

Abstract:

The currently accepted mechanism of trioxane antimalarial action involves generation of free radicals within or near susceptible sites, probably arising from the production of distonic radical anions. An alternative mechanistic proposal, involving ionic scission of the peroxide group and consequent generation of a carbocation at C-4, has been suggested to account for antimalarial activity. We have investigated this latter mechanism using DFT (B3LYP/6-31+G* level) and established the preferred Lewis acid protonation sites (artemisinin O5a >> O4a ≈ O3a > O2a > O1a; arteether O4a ≥ O3a > O5b >> O2a > O1a; Figure 3) and the consequent decomposition pathways and hydrolysis sites. In neither molecule is protonation likely to occur on the peroxide bond O1-O2 and therefore lead to scission. The alternative radical pathway therefore remains the likeliest explanation for antimalarial action.