992 results for Linear erosion processes


Relevance: 30.00%

Abstract:

Prediction filters are well-known models for signal estimation in communications, control and many other areas. The classical method for deriving linear predictive coding (LPC) filters is typically based on the minimization of a mean square error (MSE). Consequently, only second-order statistics are required, but the estimate is optimal only if the residue is independent and identically distributed (iid) Gaussian. In this paper, we derive the ML estimate of the prediction filter. Relationships with robust estimation of auto-regressive (AR) processes, with blind deconvolution and with source separation based on mutual information minimization are then detailed. The algorithm, based on the minimization of a high-order statistics criterion, uses on-line estimation of the residue statistics. Experimental results highlight the interest of this approach.
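
As an illustration of the second-order baseline that the paper extends, the following is a minimal sketch of the classical MSE-based LPC estimate (the autocorrelation/Yule-Walker solution); the function name and the synthetic AR(2) test signal are placeholders, and the ML, high-order-statistics algorithm of the paper is not reproduced here.

```python
import numpy as np

def lpc_mse(x, order):
    """Classical LPC: solve the Yule-Walker normal equations R a = r."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    # Prediction residue e[n] = x[n] - sum_k a[k] * x[n-1-k]
    pred = np.zeros(n - order)
    for k in range(order):
        pred += a[k] * x[order - 1 - k: n - 1 - k]
    return a, x[order:] - pred

# Example on a synthetic AR(2) signal driven by iid Gaussian noise:
rng = np.random.default_rng(0)
sig = np.zeros(2000)
for t in range(2, len(sig)):
    sig[t] = 1.2 * sig[t - 1] - 0.5 * sig[t - 2] + rng.standard_normal()
coeffs, residue = lpc_mse(sig, order=2)
print(coeffs)  # should be close to [1.2, -0.5]
```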

Relevance: 30.00%

Abstract:

This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.

Relevance: 30.00%

Abstract:

In this article a two-dimensional transient boundary element formulation based on the mass matrix approach is discussed. The implicit formulation of the method for elastoplastic analysis is considered, as well as the treatment of viscous damping effects. The time integration is based on the Newmark ρ and Houbolt methods, while the domain integrals for mass, elastoplastic and damping effects are carried out by the well-known cell approximation technique. The boundary element algebraic relations are also coupled with finite element frame relations to solve stiffened domains. Some examples illustrating the accuracy and efficiency of the proposed formulation are also presented.
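
For readers unfamiliar with the implicit time-integration schemes mentioned above, below is a minimal sketch of one standard implicit Newmark step for the semi-discrete system M·a + C·v + K·u = f. It illustrates only the time-stepping idea, not the boundary element formulation itself; the parameter values and the small two-degree-of-freedom example are illustrative assumptions.

```python
import numpy as np

def newmark_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """Advance displacement u, velocity v and acceleration a by one step dt
    (average-acceleration Newmark scheme for a linear system)."""
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    f_eff = (f_next
             + M @ (u / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
             + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2.0 * beta) - 1.0) * a))
    u_next = np.linalg.solve(K_eff, f_eff)
    a_next = (u_next - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Example: free vibration of a damped 2-DOF system
M = np.eye(2)
C = 0.05 * np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
u, v = np.array([1.0, 0.0]), np.zeros(2)
a = np.linalg.solve(M, -C @ v - K @ u)   # consistent initial acceleration
for _ in range(200):
    u, v, a = newmark_step(M, C, K, u, v, a, np.zeros(2), dt=0.05)
print(u, v)
```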

Relevance: 30.00%

Abstract:

Global challenges, complexity and continuous uncertainty demand development of leadership approaches, employees and multi-organisation constellations. Current leadership theories do not sufficiently address the needs of complex business environments. First of all, before successful leadership models can be applied in practice, leadership needs to shift from the industrial age to the knowledge era. Many leadership models still view leadership solely through the perspective of linear process thinking. In addition, there is not enough knowledge or experience in applying these newer models in practice. Leadership theories continue to be based on the assumption that leaders possess or have access to all the relevant knowledge and capabilities to decide future directions without external advice. In many companies, however, the workforce consists of skilled professionals whose work and related interfaces are so challenging that the leaders cannot grasp all the linked viewpoints and cross-impacts alone.

One of the main objectives of this study is to understand how to support participants in organisations and their stakeholders in confronting various environments through practice-based innovation processes. Another aim is to find effective ways of recognising and reacting to diverse contexts, so that companies and other stakeholders are better able to link to knowledge flows and shared value creation processes in advancing joint value to their customers. The main research question of this dissertation is, then, to seek understanding of how to enhance leadership in complex environments.

The dissertation can, on the whole, be characterised as a qualitative multiple-case study. The research questions and objectives were investigated through six studies published in international scientific journals. The main methods applied were interviews, action research and a survey. The empirical focus was on Finnish companies, and the research questions were examined in various organisations at the top levels (leaders and managers) and bottom levels (employees), in the context of collaboration between organisations and cooperation between case companies and their client organisations. The emphasis of the analysis, however, is on the internal and external aspects of organisations as they are enacted in practice-based innovation processes.

The results of this study suggest that the Cynefin framework, complexity leadership theory and transformational leadership represent theoretical models applicable to developing leadership through practice-based innovation. In and of themselves, they all support confronting contemporary challenges, but an implementable method for organisations may be constructed by assimilating them into practice-based innovation processes. Recognition of diverse environments, their various contexts and their roles in the activities and collaboration of organisations and their interest groups is ever more important to achieving better interaction, in which a strategic or formal status may be bypassed. In innovation processes it is not necessarily the leader who is in possession of the essential knowledge; thus, it is the role of leadership to offer methods and arenas where different actors may generate advances. Enabling and supporting continuous interaction and integrated knowledge flows is of crucial importance to the emergence of innovations in the activities of organisations and various forms of collaboration.
The main contribution of this dissertation relates to applying these new conceptual models in practice. Empirical evidence on the relevance of different leadership roles in practice-based innovation processes in Finnish companies is another valuable contribution. Finally, the dissertation sheds light on the significance of combining complexity science with leadership and innovation theories in research.

Relevance: 30.00%

Abstract:

Meandering rivers have been perceived to evolve rather similarly around the world, independently of the location or size of the river. Despite the many consistent processes and characteristics, they have also been noted to show complex and unique sets of fluviomorphological processes in which local factors play an important role. These complex interactions of flow and morphology notably affect the development of the river. Comprehensive and fundamental field, flume and theoretical studies of fluviomorphological processes in meandering rivers were carried out especially during the latter part of the 20th century. However, because these studies relied on traditional field measurement techniques, their spatial and temporal resolution is not comparable to the level achievable today. The hypothesis of this study is that exploiting the increased spatial and temporal resolution of the data, achieved by combining conventional field measurements with a range of modern technologies, will provide new insights into the spatial patterns of the flow-sediment interaction in meandering streams, which have been perceived to show notable variation in space and time.

This thesis shows how modern technologies can be combined to derive very high spatial and temporal resolution data on fluviomorphological processes over meander bends. The flow structure over the bends is recorded in situ using an acoustic Doppler current profiler (ADCP), and the spatial and temporal resolution of the flow data is enhanced using 2D and 3D CFD over various meander bends. The CFD is also exploited to simulate sediment transport. Multi-temporal terrestrial laser scanning (TLS), mobile laser scanning (MLS) and echo sounding data are used to measure the flow-based changes and formations over meander bends and to build the computational models. The spatial patterns of erosion and deposition over meander bends are analysed relative to the measured and modelled flow field and sediment transport, and the results are compared with the classic theories of the processes in meander bends.

For the most part, the results of this study agree well with existing theories and the results of previous studies. However, some new insights regarding the spatial and temporal patterns of the flow-sediment interaction in a natural sand-bed meander bend are provided. The results show the advantages of the rapid and detailed measurement techniques and of the spatial and temporal resolution provided by CFD, unachievable with field measurements alone. The thesis also discusses the limitations that remain in the measurement and modelling methods and in the understanding of the fluvial geomorphology of meander bends. Further, the sensitivity of the hydro- and morphodynamic models to user-defined parameters is tested, and the modelling results are assessed against detailed field measurements. The study is implemented in the meandering sub-Arctic Pulmanki River in Finland. The river is unregulated and sand-bedded, and major morphological changes occur annually on the meander point bars, which are inundated only during the snow-melt-induced spring floods. The outcome of this study applies to sand-bed meandering rivers in regions where normally one significant flood event occurs annually, such as Arctic areas with snow-melt-induced spring floods, and where the point bars of the meander bends are inundated only during the flood events.

Relevance: 30.00%

Abstract:

The estimation of losses plays a key role in the process of building any electrical machine. Traditionally, losses are estimated at the design stage by taking the characteristics of the electrical steel from the manufacturer's catalogue and calculating the losses from them. However, this approach is inaccurate, since the electrical steel undergoes several manufacturing processes while the machine is being built, and these directly affect its magnetic properties and hence its characteristics. This means that the B–H curve of the steel obtained from the catalogue will have changed. Moreover, when the machine is loaded and rotating, further important changes occur to the B–H characteristic of the electrical steel, such as stress on the laminated iron. Accordingly, losses pre-estimated from the catalogue data can be far from the actual losses. To estimate the losses precisely, the significant manufacturing factors must therefore be included. This paper introduces a systematic estimation of the losses that includes the effect of one of the manufacturing factors; in the same way, any other manufacturing factor can be incorporated into the pre-design loss estimation.
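
Purely as a hypothetical illustration of how a manufacturing effect might be folded into a catalogue-based loss estimate, the sketch below uses a Steinmetz-type specific-loss expression with a multiplicative build factor. The Steinmetz form, the coefficient values and the build factor are assumptions for illustration only and do not represent the loss model or the manufacturing factor studied in the paper.

```python
def core_loss_per_kg(B_peak, f, k_h=0.02, alpha=1.6, k_e=5e-5, build_factor=1.0):
    """Specific iron loss [W/kg] = build_factor * (hysteresis + eddy-current terms).

    B_peak : peak flux density [T]; f : frequency [Hz].
    k_h, alpha, k_e are catalogue-fitted coefficients (values here are made up).
    build_factor > 1 models the loss increase caused by a manufacturing process
    (e.g. punching/cutting of the laminations).
    """
    p_hyst = k_h * f * B_peak**alpha
    p_eddy = k_e * (f * B_peak)**2
    return build_factor * (p_hyst + p_eddy)

# Catalogue-based estimate vs. estimate including a hypothetical manufacturing penalty:
print(core_loss_per_kg(1.5, 50))                     # as-catalogued steel
print(core_loss_per_kg(1.5, 50, build_factor=1.3))   # e.g. +30% loss after punching
```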

Relevance: 30.00%

Abstract:

The relationship between the child's cognitive development and neurological maturation has been of theoretical interest for many years. Due to difficulties such as the lack of sophisticated techniques for measuring neurological changes and a paucity of normative data, few studies exist that have attempted to correlate the two factors. Recent theory on intellectual development has proposed that neurological maturation may be a factor in the increase of short-term memory storage space. Improved technology has allowed reliable recordings of neurological maturation. In an attempt to correlate cognitive development and neurological maturation, this study tested 3- and 11-year-old children. Fine motor and gross motor short-term memory tests were used to index cognitive development. Somatosensory evoked potentials elicited by median nerve stimulation were used to measure the time required for the sensation to pass along the nerve to specific points on the somatosensory pathway. Times were recorded for N14, N20, and P22 interpeak latencies. Maturation of the central nervous system (brain and spinal cord) and the peripheral nervous system (outside the brain and spinal cord) was indicated by the recorded times. Significant developmental differences occurred between 3- and 11-year-olds in memory levels, peripheral conduction velocity and central conduction times. Linear regression analyses showed that as age increased, memory levels increased and central conduction times decreased. Between the 11-year-old groups, there were no significant differences in central or peripheral nervous system maturation between subjects who achieved a 12-plus score on the digit span test of the WISC-R and those who scored 7 or lower on the same test. Levels achieved on the experimental gross and fine motor short-term memory tests differed significantly within the 11-year-old group.

Relevance: 30.00%

Abstract:

In this paper, we develop finite-sample inference procedures for stationary and nonstationary autoregressive (AR) models. The method is based on special properties of Markov processes and a split-sample technique. The results on Markovian processes (intercalary independence and truncation) only require the existence of conditional densities. They are proved for possibly nonstationary and/or non-Gaussian multivariate Markov processes. In the context of a linear regression model with AR(1) errors, we show how these results can be used to simplify the distributional properties of the model by conditioning a subset of the data on the remaining observations. This transformation leads to a new model which has the form of a two-sided autoregression to which standard classical linear regression inference techniques can be applied. We show how to derive tests and confidence sets for the mean and/or autoregressive parameters of the model. We also develop a test on the order of an autoregression. We show that a combination of subsample-based inferences can improve the performance of the procedure. An application to U.S. domestic investment data illustrates the method.
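
The following is a minimal sketch of the two-sided-autoregression idea for a zero-mean Gaussian AR(1): conditionally on every second observation, the remaining observations are independent and satisfy a classical regression on the sum of their two neighbours, to which standard OLS inference applies. The simulation setup and parameter values are illustrative assumptions; the paper's exact tests and confidence sets are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.6, 401
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

idx = np.arange(1, n - 1, 2)              # subsample conditioned on its neighbours
x = y[idx - 1] + y[idx + 1]               # two-sided regressor y_{t-1} + y_{t+1}
z = y[idx]

beta_hat = np.dot(x, z) / np.dot(x, x)    # OLS without intercept
resid = z - beta_hat * x
se = np.sqrt(resid @ resid / (len(z) - 1) / (x @ x))
beta_true = phi / (1 + phi**2)            # conditional-mean coefficient for an AR(1)
print("beta_hat =", beta_hat, " target =", beta_true)
print("classical t-statistic for H0: beta = phi/(1+phi^2):", (beta_hat - beta_true) / se)
```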

Relevance: 30.00%

Abstract:

Social interactions arguably provide a rationale for several important phenomena, from smoking and other risky behavior in teens to peer effects in school performance. We study social interactions in dynamic economies. For these economies, we provide existence (Markov perfect equilibrium in pure strategies), ergodicity, and welfare results. We also characterize equilibria in terms of agents' policy functions, spatial equilibrium correlations and social multiplier effects, depending on the nature of interactions. Most importantly, we formally study the identification of social interactions, with special emphasis on the restrictions imposed by dynamic equilibrium conditions.

Relevance: 30.00%

Abstract:

With advances in information technology, economic and financial time series data are increasingly available. However, if standard time series techniques are used, this wealth of information comes with a dimensionality problem. Since most series of interest are highly correlated, their dimension can be reduced using factor analysis, a technique that has become increasingly popular in economics since the 1990s. Given the availability of data and the computational advances, several new questions arise. What are the effects and the transmission of structural shocks in a data-rich environment? Can the information contained in a large set of economic indicators help to better identify monetary policy shocks, given the problems encountered in applications based on standard models? Can financial shocks be identified and their effects on the real economy measured? Can the existing factor method be improved by incorporating another dimension-reduction technique such as VARMA analysis? Does this produce better forecasts of the main macroeconomic aggregates and help with impulse response analysis? Finally, can factor analysis be applied to random parameters? For example, is there only a small number of sources of the temporal instability of coefficients in empirical macroeconomic models? Using structural factor analysis and VARMA modelling, this thesis answers these questions through five articles. The first two chapters study the effects of monetary and financial shocks in a data-rich environment. The third article proposes a new method combining factor models and VARMA. This approach is applied in the fourth article to measure the effects of credit shocks in Canada. The contribution of the last chapter is to impose a factor structure on time-varying parameters and to show that there is a small number of sources of this instability.

The first article analyses the transmission of monetary policy in Canada using a factor-augmented vector autoregressive (FAVAR) model. Previous VAR-based studies found several empirical anomalies following a monetary policy shock. We estimate the FAVAR model using a large number of monthly and quarterly macroeconomic series. We find that the information contained in the factors is important for properly identifying the transmission of monetary policy and that it helps correct the standard empirical anomalies. Finally, the FAVAR framework yields impulse response functions for every indicator in the data set, thus producing the most comprehensive analysis to date of the effects of monetary policy in Canada.

Motivated by the recent economic crisis, research on the role of the financial sector has regained importance. In the second article we examine the effects and propagation of credit shocks on the real economy, using a large set of economic and financial indicators within a structural factor model. We find that a credit shock immediately increases credit spreads, decreases the value of Treasury bills and causes a recession. These shocks have an important effect on measures of real activity, price indices, and leading and financial indicators. Unlike other studies, our identification procedure for the structural shock does not require timing restrictions between the financial and macroeconomic factors. Moreover, it provides an interpretation of the factors without constraining their estimation.

In the third article we study the relationship between the VARMA and factor representations of vector stochastic processes, and propose a new class of factor-augmented VARMA (FAVARMA) models. Our starting point is the observation that, in general, multivariate series and the associated factors cannot simultaneously follow a finite-order VAR process. We show that the dynamic process of the factors, extracted as a linear combination of the observed variables, is in general a VARMA and not a VAR, as is assumed elsewhere in the literature. Second, we show that even if the factors follow a finite-order VAR, this implies a VARMA representation for the observed series. We therefore propose the FAVARMA framework, which combines these two methods of reducing the number of parameters. The model is applied in two forecasting exercises using the U.S. and Canadian data of Boivin, Giannoni and Stevanovic (2010, 2009) respectively. The results show that the VARMA part helps to forecast the important macroeconomic aggregates better than standard models. Finally, we estimate the effects of a monetary shock using the data and identification scheme of Bernanke, Boivin and Eliasz (2005). Our FAVARMA(2,1) model with six factors gives coherent and precise results on the effects and transmission of monetary policy in the United States. Unlike the FAVAR model used in that study, where 510 VAR coefficients had to be estimated, we produce similar results with only 84 parameters for the dynamic process of the factors.

The objective of the fourth article is to identify and measure the effects of credit shocks in Canada in a data-rich environment, using the structural FAVARMA model. Within the financial accelerator framework developed by Bernanke, Gertler and Gilchrist (1999), we approximate the external finance premium by credit spreads. On the one hand, we find that an unanticipated increase in the U.S. external finance premium generates a significant and persistent recession in Canada, accompanied by an immediate rise in Canadian credit spreads and interest rates. The common component appears to capture the important dimensions of the cyclical fluctuations of the Canadian economy. Variance decomposition analysis reveals that this credit shock has an important effect on different sectors of real activity, price indices, leading indicators and credit spreads. On the other hand, an unexpected increase in the Canadian external finance premium has no significant effect in Canada. We show that the effects of credit shocks in Canada are essentially driven by global conditions, approximated here by the U.S. market. Finally, given the identification procedure for the structural shocks, we obtain economically interpretable factors.

The behaviour of economic agents and of the economic environment may vary over time (e.g. changes in monetary policy strategy, shock volatility), inducing parameter instability in reduced-form models. Standard time-varying parameter (TVP) models traditionally assume independent stochastic processes for all TVPs. In this article we show that the number of sources of temporal variability in the coefficients is probably very small, and we provide the first known empirical evidence of this in empirical macroeconomic models. The Factor-TVP approach, proposed in Stevanovic (2010), is applied within a standard VAR model with random coefficients (TVP-VAR). We find that a single factor explains most of the variability of the VAR coefficients, while the shock volatility parameters vary independently. The common factor is positively correlated with the unemployment rate. The same analysis is carried out on data including the recent financial crisis; the procedure then suggests two factors, and the behaviour of the coefficients shows an important change since 2007. Finally, the method is applied to a TVP-FAVAR model, and we find that only five dynamic factors govern the temporal instability in almost 700 coefficients.
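
A minimal two-step FAVAR sketch in the spirit of the first article might look as follows, assuming a synthetic data panel: principal-component factors are extracted from a large standardised panel, and a VAR is fitted to the factors augmented with an observed policy variable. The data, dimensions and lag order are placeholders; the identification schemes and the FAVARMA extension proposed in the thesis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N, n_factors, p = 200, 120, 3, 2

X = rng.standard_normal((T, N))        # placeholder macro panel (T observations x N series)
r = rng.standard_normal(T)             # placeholder observed policy variable

Xs = (X - X.mean(0)) / X.std(0)        # standardise each series
# Step 1: factors = leading principal components of the panel
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
F = Xs @ Vt[:n_factors].T

# Step 2: VAR(p) on Z_t = [F_t, r_t], estimated equation by equation by least squares
Z = np.column_stack([F, r])
Y = Z[p:]
lags = np.column_stack([np.ones(T - p)] + [Z[p - j - 1:T - j - 1] for j in range(p)])
B, *_ = np.linalg.lstsq(lags, Y, rcond=None)
print("VAR coefficient matrix shape:", B.shape)   # (1 + p*(n_factors+1), n_factors+1)
```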

Relevance: 30.00%

Abstract:

We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
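
A rough sketch of the key idea, under simplifying assumptions, is given below: since a GARCH-type process admits a linear Box–Jenkins representation for the squared returns, a long autoregression (the "sieve") can be fitted to the squared returns and its residuals resampled to build bootstrap paths and a prediction interval for the next squared return. The helper name, lag order and simulation settings are placeholders, and this is a simplified illustration rather than the authors' exact procedure or calibration.

```python
import numpy as np

def ar_sieve_interval(r, p=10, horizon=1, n_boot=500, level=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = r**2                                           # squared returns (volatility proxy)
    T = len(x)
    # Fit AR(p) with intercept to x by least squares (the "sieve")
    Y = x[p:]
    L = np.column_stack([np.ones(T - p)] + [x[p - j - 1:T - j - 1] for j in range(p)])
    coef, *_ = np.linalg.lstsq(L, Y, rcond=None)
    resid = Y - L @ coef
    resid = resid - resid.mean()                       # centred sieve residuals
    sims = np.empty(n_boot)
    for b in range(n_boot):
        hist = list(x[-p:])                            # last p observed squared returns
        for _ in range(horizon):
            eps = rng.choice(resid)                    # resample a residual
            nxt = coef[0] + np.dot(coef[1:], hist[::-1]) + eps
            hist.append(max(nxt, 0.0))                 # squared returns stay non-negative
            hist.pop(0)
        sims[b] = hist[-1]
    return np.quantile(sims, [(1 - level) / 2, (1 + level) / 2])

# Example with a placeholder return series:
r = np.random.default_rng(1).standard_normal(1000) * 0.01
print(ar_sieve_interval(r))
```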

Relevance: 30.00%

Abstract:

Sediment composition is mainly controlled by the nature of the source rock(s) and by chemical (weathering) and physical processes (mechanical crushing, abrasion, hydrodynamic sorting) during alteration and transport. Although the factors controlling these processes are conceptually well understood, detailed quantifications of compositional changes induced by a single process are rare, as are examples where the effects of several processes can be distinguished. The present study was designed to characterize the role of mechanical crushing and sorting in the absence of chemical weathering. Twenty sediment samples were taken from Alpine glaciers that erode almost pure granitoid lithologies. For each sample, 11 grain-size fractions from granules to clay (φ grades <-1 to >9) were separated, and each fraction was analysed for its chemical composition. The presence of clear steps in the box-plots of all parts (in adequate ilr and clr scales) against φ is assumed to be explained by typical crystal size ranges for the relevant mineral phases. These scatter plots and the biplot suggest splitting the full grain-size range into three groups: coarser than φ=4 (comparatively rich in SiO2, Na2O, K2O, Al2O3, and dominated by “felsic” minerals like quartz and feldspar), finer than φ=8 (comparatively rich in TiO2, MnO, MgO, Fe2O3, mostly related to “mafic” sheet silicates like biotite and chlorite), and intermediate grain sizes (4 ≤ φ < 8; comparatively rich in P2O5 and CaO, related to apatite and some feldspar). To further test the absence of chemical weathering, the observed compositions were regressed against three explanatory variables: a trend on grain size in the φ scale, a step function for φ ≥ 4, and another for φ ≥ 8. The original hypothesis was that the trend could be identified with weathering effects, whereas each step function would highlight those minerals with the largest characteristic size at its lower end. Results suggest that this assumption is reasonable for the step functions, but that besides weathering some other factors (different mechanical behaviour of minerals) also make an important contribution to the trend. Key words: sediment, geochemistry, grain size, regression, step function
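
A minimal sketch of the regression setup described above (clr-transformed compositions regressed on a φ trend plus step functions at φ = 4 and φ = 8) could look as follows; the compositions are random placeholders, not the study's oxide data, and the fitted coefficients carry no geological meaning here.

```python
import numpy as np

def clr(comp):
    """Centred log-ratio transform of compositions given row-wise (entries must be > 0)."""
    logc = np.log(comp)
    return logc - logc.mean(axis=1, keepdims=True)

phi = np.arange(-1, 10)                             # 11 grain-size fractions in phi units
rng = np.random.default_rng(0)
comp = rng.dirichlet(np.ones(5), size=len(phi))     # placeholder 5-part compositions

Y = clr(comp)
X = np.column_stack([
    np.ones_like(phi, dtype=float),                 # intercept
    phi.astype(float),                              # linear trend in phi
    (phi >= 4).astype(float),                       # step at phi = 4
    (phi >= 8).astype(float),                       # step at phi = 8
])
B, *_ = np.linalg.lstsq(X, Y, rcond=None)           # one clr regression per part
print("coefficient matrix (rows: intercept, trend, step4, step8):\n", B)
```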

Relevance: 30.00%

Abstract:

This thesis is divided into two parts. The first part presents and studies telegraph processes, Poisson processes with a telegraph compensator, and jump telegraph processes. The study in this first part includes the computation of the distribution of each process, their means and variances, and their moment generating functions, among other properties. Using these properties, the second part studies option-pricing models based on jump telegraph processes. It describes how to compute the risk-neutral measures, establishes the no-arbitrage condition for this type of model, and finally computes the prices of European call and put options.
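
For illustration, below is a minimal sketch of simulating one path of a (jump) telegraph process: the state moves with velocity ±c, switches direction at the arrival times of a Poisson process with rate λ, and receives an optional jump at each switch. The jump convention, symbol names and parameter values are assumptions for illustration; the thesis' distributional and pricing results are not reproduced here.

```python
import numpy as np

def telegraph_path(T=1.0, c=1.0, lam=5.0, jump=0.0, x0=0.0, seed=0):
    """Simulate one (jump) telegraph path on [0, T]; returns switch times and values."""
    rng = np.random.default_rng(seed)
    t, x, direction = 0.0, x0, 1
    times, values = [0.0], [x0]
    while True:
        tau = rng.exponential(1.0 / lam)             # waiting time until the next velocity switch
        if t + tau >= T:
            x += direction * c * (T - t)             # drift to the horizon and stop
            times.append(T); values.append(x)
            return np.array(times), np.array(values)
        x += direction * c * tau + direction * jump  # drift, then jump at the switch epoch
        t += tau
        direction = -direction                       # reverse the velocity
        times.append(t); values.append(x)

ts, xs = telegraph_path(jump=0.1)
print(len(ts), "switch points, terminal value", xs[-1])
```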

Relevance: 30.00%

Abstract:

The linear viscoelastic (LVE) spectrum is one of the primary fingerprints of polymer solutions and melts, carrying information about most relaxation processes in the system. Many single-chain theories and models start with predicting the LVE spectrum to validate their assumptions. However, until now, no reliable linear stress relaxation data were available from simulations of multichain systems. In this work, we propose a new efficient way to calculate a wide variety of correlation functions and mean-square displacements during simulations without significant additional CPU cost. Using this method, we calculate stress-stress autocorrelation functions for a simple bead-spring model of a polymer melt for a wide range of chain lengths, densities, temperatures, and chain stiffnesses. The obtained stress-stress autocorrelation functions were compared with the single-chain slip-spring model in order to obtain entanglement-related parameters, such as the plateau modulus or the molecular weight between entanglements. Then, the dependence of the plateau modulus on the packing length is discussed. We have also identified three different contributions to the stress relaxation: bond-length relaxation, colloidal and polymeric. Their dependence on the density and the temperature is demonstrated for short unentangled systems without inertia.
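
As a post-processing illustration only (not the on-the-fly, low-cost correlator proposed in this work), the following sketch computes a shear-stress autocorrelation function from a saved stress time series via FFT and integrates it in the Green-Kubo sense. The synthetic stress signal, the time step and the V/(k_B T) prefactor are placeholders.

```python
import numpy as np

def autocorrelation(sig):
    """Unnormalised autocorrelation <sig(t0) sig(t0 + t)>, averaged over time origins t0."""
    n = len(sig)
    f = np.fft.rfft(sig, 2 * n)                   # zero-padded FFT avoids circular wrap-around
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / np.arange(n, 0, -1)              # divide by the number of overlapping origins

# Placeholder "measured" shear stress with a single exponential relaxation time tau:
rng = np.random.default_rng(0)
dt, tau = 0.01, 1.0
noise = rng.standard_normal(20000)
sigma_xy = np.convolve(noise, np.exp(-np.arange(0, 10 * tau, dt) / tau))[:20000]

acf = autocorrelation(sigma_xy)
V_over_kT = 1.0                                   # prefactor V / (k_B T), illustrative value
G_t = V_over_kT * acf                             # relaxation modulus G(t)
eta = V_over_kT * np.trapz(acf, dx=dt)            # Green-Kubo shear viscosity estimate
print("G(0) =", G_t[0], " eta =", eta)
```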

Relevance: 30.00%

Abstract:

The purpose of Research Theme 4 (RT4) was to advance understanding of the basic science issues at the heart of the ENSEMBLES project, focusing on the key processes that govern climate variability and change, and that determine the predictability of climate. Particular attention was given to understanding linear and non-linear feedbacks that may lead to climate surprises, and to understanding the factors that govern the probability of extreme events. Improved understanding of these issues will contribute significantly to the quantification and reduction of uncertainty in seasonal to decadal predictions and projections of climate change. RT4 exploited the ENSEMBLES integrations (stream 1) performed in RT2A as well as undertaking its own experimentation to explore key processes within the climate system. It was working at the cutting edge of problems related to climate feedbacks, the interaction between climate variability and climate change (especially how climate change pertains to extreme events), and the predictability of the climate system on a range of time-scales. The statistical methodologies developed for extreme event analysis are new and state-of-the-art. The RT4-coordinated experiments, which have been conducted with six different atmospheric GCMs forced by common time-invariant sea surface temperature (SST) and sea-ice fields (removing some sources of inter-model variability), are designed to help to understand model uncertainty (rather than scenario or initial condition uncertainty) in predictions of the response to greenhouse-gas-induced warming. RT4 links strongly with RT5 on the evaluation of the ENSEMBLES prediction system and feeds back its results to RT1 to guide improvements in the Earth system models and, through its research on predictability, to steer the development of methods for initialising the ensembles.