344 results for jumps


Relevance:

20.00%

Publisher:

Abstract:

This work deals with the nonlinear piezoelectric coupling in vibration-based energy harvesting studied by A. Triplett and D.D. Quinn in the Journal of Intelligent Material Systems and Structures (2009). In that paper the first-order nonlinear fundamental equation has a three-dimensional state variable. By introducing observable and control variables in such a way that the controlled system becomes a SISO system, we obtain as a corollary that, for a particular choice of the observable variable, an explicit functional relation can be presented between that variable and the variable representing the harvested charge. Then, by observing that the structure of the input-output decomposition changes essentially as the relative degree changes, producing bifurcation branches in the zero dynamics, we are able to identify this type of bifurcation and to indicate its close relation with the Hartman-Grobman theorem on the decomposition into stable and unstable manifolds at hyperbolic points.
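The relative degree that drives the bifurcation structure mentioned above can be illustrated with a short symbolic computation. The sketch below is not the Triplett-Quinn harvester model; it uses a hypothetical three-state affine SISO system dx/dt = f(x) + g(x)u, y = h(x), and simply counts how many Lie derivatives of the output are needed before the input appears.

```python
# Minimal sketch: relative degree of an affine SISO system via Lie derivatives.
# The system below is a hypothetical 3-state example, not the harvester model.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

f = sp.Matrix([x2, -x1 - x1**3 + x3, -x3])   # drift vector field (illustrative)
g = sp.Matrix([0, 0, 1])                      # input vector field
h = x1                                        # chosen observable (output)

def lie(vec, scalar, x):
    """Lie derivative of a scalar field along a vector field."""
    return (sp.Matrix([scalar]).jacobian(x) * vec)[0]

# Differentiate the output along f until the input direction shows up.
r, Lfh = 1, h
while sp.simplify(lie(g, Lfh, x)) == 0:
    Lfh = lie(f, Lfh, x)
    r += 1
print("relative degree r =", r)   # for this example, r = 3
```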

Relevance:

20.00%

Publisher:

Abstract:

The erbium-based manganite ErMnO3 has been partially substituted at the manganese site by the transition-metal elements Ni and Co. The perovskite orthorhombic structure is found for x(Ni) = 0.2-0.5 in the nickel-based solid solution ErNixMn1-xO3, while it can be extended up to x(Co) = 0.7 in the case of cobalt, provided that the synthesis is performed under oxygenation conditions to favor the presence of Co3+. The presence of different magnetic entities (i.e., Er3+, Ni2+, Co2+, Co3+, Mn3+, and Mn4+) leads to quite unusual magnetic properties, characterized by the coexistence of antiferromagnetic and ferromagnetic interactions. In ErNixMn1-xO3, a critical concentration x(crit)(Ni) = 1/3 separates two regimes: spin-canted antiferromagnetic interactions predominate at x < x(crit), while ferromagnetic behavior is enhanced for x > x(crit). Spin-reversal phenomena are present in both the nickel- and cobalt-based compounds. A phenomenological model based on two interacting sublattices, coupled by an antiferromagnetic exchange interaction, explains the inversion of the overall magnetic moment at low temperatures. In this model, the ferromagnetic transition-metal lattice, which orders at Tc, creates a strong local field at the erbium site, polarizing the Er moments in a direction opposite to the applied field. At low temperatures, when the contribution of the paramagnetic erbium sublattice, which varies as 1/T, becomes larger than the ferromagnetic contribution, the total magnetic moment changes sign, leading to an overall ferrimagnetic state. The half-substituted compound ErCo0.50Mn0.50O3 was studied in detail, since its magnetization loops present two well-identified anomalies: an intersection of the magnetization branches at low fields, and magnetization jumps at high fields. The influence of the oxidizing conditions was studied in other compositions close to the 50/50 Mn/Co substitution rate. These anomalies are clearly connected to the spin-inversion phenomena and to the simultaneous presence of Co2+ and Co3+ magnetic moments. Dynamical aspects should be considered to properly identify the high-field anomaly, since it depends on the magnetic field sweep rate. (C) 2006 Elsevier B.V. All rights reserved.
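The sign inversion predicted by the two-sublattice model can be reproduced with a very small numerical sketch. The code below only illustrates the mechanism stated in the abstract (an ordered transition-metal moment plus an Er sublattice polarized antiparallel through a local field, with a Curie-law 1/T response); all parameter values are hypothetical and are not fitted to ErCo0.50Mn0.50O3.

```python
# Toy two-sublattice compensation model: a ferromagnetic transition-metal (TM)
# moment ordered below Tc, and an Er moment induced antiparallel to it through
# a local field, with a Curie-law 1/T susceptibility. Hypothetical parameters.
import numpy as np

Tc = 60.0        # ordering temperature of the TM sublattice (K)
M_sat = 1.0      # saturated TM moment (arbitrary units)
C_Er = 20.0      # Curie constant of the Er sublattice (arb. units * K)
lam = 1.0        # antiferromagnetic coupling: H_loc is proportional to -M_TM

T = np.linspace(2.0, Tc - 1.0, 500)

# Simple mean-field-like shape for the ordered TM moment below Tc.
M_TM = M_sat * np.sqrt(1.0 - T / Tc)

# Er moment induced antiparallel to M_TM, growing as 1/T at low temperature.
M_Er = -(C_Er / T) * lam * M_TM

M_total = M_TM + M_Er    # = M_TM * (1 - lam * C_Er / T): sign change at T = lam * C_Er

idx = np.where(np.diff(np.sign(M_total)) != 0)[0]
if idx.size:
    print(f"total moment changes sign near T = {T[idx[0]]:.1f} K")
```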

Relevance:

20.00%

Publisher:

Abstract:

This paper deals with a stochastic optimal control problem involving discrete-time Markov jump linear systems. The jumps, or changes between the system operation modes, evolve according to an underlying Markov chain. In the model studied, the problem horizon is defined by a stopping time τ which represents either the occurrence of a fixed number N of failures or repairs (τN), or the occurrence of a crucial failure event (τΔ), after which the system is brought to a halt for maintenance. In addition, an intermediary mixed case in which τ represents the minimum of τN and τΔ is also considered. These stopping times coincide with some of the jump times of the Markov state, and the information available allows the reconfiguration of the control action at each jump time, in the form of a linear feedback gain. The solution for the linear quadratic problem with complete Markov state observation is presented. The solution is given in terms of recursions of a set of algebraic Riccati equations (ARE) or of a set of coupled algebraic Riccati equations (CARE).
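For orientation, the flavour of the coupled Riccati computation mentioned at the end of the abstract can be sketched as follows. This is the standard fixed-point iteration for the infinite-horizon LQ problem of a Markov jump linear system, not the stopping-time formulation of the paper, and the system matrices and transition matrix below are hypothetical.

```python
# Sketch: fixed-point iteration of the coupled algebraic Riccati equations (CARE)
# for a discrete-time Markov jump linear system x_{k+1} = A_i x_k + B_i u_k,
# with mode-dependent weights (Q_i, R_i) and mode transition matrix P = [p_ij].
import numpy as np

A = [np.array([[1.0, 0.1], [0.0, 1.0]]),     # mode-1 dynamics (illustrative)
     np.array([[0.9, 0.2], [0.0, 1.1]])]     # mode-2 dynamics (illustrative)
B = [np.array([[0.0], [1.0]]),
     np.array([[0.0], [0.5]])]
Q = [np.eye(2), 2.0 * np.eye(2)]             # state weights per mode
R = [np.array([[1.0]]), np.array([[1.0]])]   # control weights per mode
P = np.array([[0.9, 0.1],                    # Markov chain transition matrix
              [0.3, 0.7]])

n = len(A)
X = [np.zeros((2, 2)) for _ in range(n)]

for _ in range(500):                         # value iteration until convergence
    E = [sum(P[i, j] * X[j] for j in range(n)) for i in range(n)]
    X = [Q[i] + A[i].T @ E[i] @ A[i]
         - A[i].T @ E[i] @ B[i] @ np.linalg.solve(
               R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
         for i in range(n)]

# Mode-dependent linear feedback gains, u_k = -K_i x_k when the chain is in mode i.
E = [sum(P[i, j] * X[j] for j in range(n)) for i in range(n)]
K = [np.linalg.solve(R[i] + B[i].T @ E[i] @ B[i], B[i].T @ E[i] @ A[i])
     for i in range(n)]
print("mode-dependent gains:", [np.round(k, 3) for k in K])
```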

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we consider the stochastic optimal control problem of discrete-time linear systems subject to Markov jumps and multiplicative noises under two criteria. The first is an unconstrained mean-variance trade-off performance criterion over time, and the second is a minimum-variance criterion over time with constraints on the expected output. We present explicit conditions for the existence of an optimal control strategy for these problems, generalizing previous results in the literature. We conclude the paper with a numerical example of a multi-period portfolio selection problem with regime switching, in which the aim is to minimize the sum of the variances of the portfolio over time under the restriction of keeping the expected value of the portfolio above minimum values specified by the investor. (C) 2011 Elsevier Ltd. All rights reserved.
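To make the portfolio application concrete, the sketch below simulates wealth under a two-regime Markov chain with regime-dependent returns and a fixed (not optimized) allocation rule, reporting the mean and variance of wealth at each period. It only illustrates the problem setup; it does not implement the optimal strategy derived in the paper, and all parameter values are hypothetical.

```python
# Sketch: Monte Carlo simulation of a multi-period portfolio under regime switching.
# Two market regimes follow a Markov chain; the risky-asset return distribution
# depends on the regime. A fixed allocation rule stands in for the optimal policy.
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.95, 0.05],        # regime transition matrix (regime 0 -> ..., 1 -> ...)
              [0.10, 0.90]])
mu = np.array([0.010, -0.005])     # mean risky return per period, by regime
sigma = np.array([0.02, 0.05])     # return volatility per period, by regime
rf = 0.002                         # risk-free return per period
alloc = 0.6                        # fixed fraction of wealth in the risky asset
T, n_paths = 24, 5000

wealth = np.ones((n_paths, T + 1))
regime = np.zeros(n_paths, dtype=int)          # all paths start in regime 0

for t in range(T):
    risky = rng.normal(mu[regime], sigma[regime])
    wealth[:, t + 1] = wealth[:, t] * (1.0 + alloc * risky + (1.0 - alloc) * rf)
    # Draw the next regime of each path from the current row of P.
    regime = np.array([rng.choice(2, p=P[r]) for r in regime])

print("expected wealth by period:", np.round(wealth.mean(axis=0), 4))
print("wealth variance by period:", np.round(wealth.var(axis=0), 6))
```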

Relevance:

20.00%

Publisher:

Abstract:

[EN] Leptin and osteocalcin play a role in the regulation of the fat-bone axis and may be altered by exercise. To determine whether osteocalcin reduces fat mass in humans fed ad libitum, and whether there is a sex dimorphism in the serum osteocalcin and leptin responses to strength training, we studied 43 male (age 23.9 +/- 2.4 yr, mean +/- SD) and 23 female physical education students (age 23.2 +/- 2.7 yr). Subjects were randomly assigned to two groups: training (TG) and control (CG). TG followed a strength training program combined with plyometric jumps for 9 wk, whereas the CG did not train. Physical fitness, body composition (dual-energy X-ray absorptiometry), and serum concentrations of hormones were determined pre- and posttraining. In the whole group of subjects (pretraining), the serum concentration of osteocalcin was positively correlated (r = 0.29-0.42, P < 0.05) with whole body and regional bone mineral content, lean mass, dynamic strength, and serum free testosterone concentration (r = 0.32). However, osteocalcin was negatively correlated with leptin concentration (r = -0.37), fat mass (r = -0.31), and percent body fat (r = -0.44). Both sexes experienced similar relative improvements in performance, lean mass (+4-5%), and whole body (+0.78%) and lumbar spine bone mineral content (+1.2-2%) with training. Serum osteocalcin concentration increased after training by 45 and 27% in men and women, respectively (P < 0.05). Fat mass was not altered by training. Vastus lateralis type II MHC composition at the start of the training program predicted 25% of the osteocalcin increase after training. Serum leptin concentration was reduced with training in women. In summary, while the relative effects of strength training plus plyometric jumps on performance, muscle hypertrophy, and osteogenesis are similar in men and women, serum leptin concentration is reduced only in women. The osteocalcin response to strength training is, in part, modulated by the muscle phenotype (MHC isoform composition). Despite the increase in osteocalcin, fat mass was not reduced.

Relevance:

20.00%

Publisher:

Abstract:

Fig. 1. Classical hydraulic jump with partially developed inflow conditions: F1 = 13.6, V1 = 4.7 m/s, B = 0.25 m, h = 0.020 m, d1 = 0.012 m, Q = 14 L/s. Photo courtesy of Dr. Hubert Chanson. Published in: Brett L. Vallé and Gregory B. Pasternack, "Submerged and unsubmerged natural hydraulic jumps in a bedrock step-pool mountain channel", Geomorphology, Volume 82, Issues 1-2 (The Hydrology and Geomorphology of Bedrock Rivers), 6 December 2006, Pages 146-159, doi:10.1016/j.geomorph.2005.09.024.
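For the inflow Froude number quoted in the caption, the downstream (sequent) depth of a classical hydraulic jump follows from the Bélanger momentum equation, d2/d1 = (sqrt(1 + 8 F1^2) - 1)/2. A minimal sketch, taking the upstream depth as the 0.012 m quoted above:

```python
# Sequent depth ratio of a classical hydraulic jump (Belanger momentum equation)
# using the inflow conditions quoted in the figure caption.
import math

F1 = 13.6          # inflow Froude number
d1 = 0.012         # inflow depth (m)

ratio = 0.5 * (math.sqrt(1.0 + 8.0 * F1**2) - 1.0)   # d2 / d1
d2 = ratio * d1

print(f"sequent depth ratio d2/d1 = {ratio:.1f}")    # about 18.7 for F1 = 13.6
print(f"downstream depth d2 = {d2:.3f} m")
```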

Relevance:

20.00%

Publisher:

Abstract:

* The aim of this study was to determine the evolutionary time line for rust fungi and to date key speciation events using a molecular clock. Evidence is provided that supports a contemporary view of a recent origin of rust fungi, with a common ancestor on a flowering plant.
* Divergence times for > 20 genera of rust fungi were studied with Bayesian evolutionary analyses. A relaxed molecular clock was applied to ribosomal and mitochondrial genes, calibrated against estimated divergence times for the hosts of rust fungi, such as Acacia (Fabaceae), angiosperms and the cupressophytes.
* Results showed that rust fungi shared a most recent common ancestor with a mean age between 113 and 115 million yr. This dates rust fungi to the Cretaceous period, which is much younger than previous estimates. Host jumps, whether taxonomically large or between host genera in the same family, most probably shaped the diversity of rust genera. Likewise, species diversified by host shifts (through coevolution) or via subsequent host jumps. This is in contrast to strict coevolution with their hosts.
* Puccinia psidii was recovered in Sphaerophragmiaceae, a family distinct from Raveneliaceae, which were regarded as confamilial in previous studies.

Relevance:

10.00%

Publisher:

Abstract:

Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas that seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at the point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity; it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based, problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed, the solution is often the very definition of the problem.

Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question arises: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process. Conceptual analogies such as search, search spaces and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. The term "search" becomes a misnomer, since it carries connotations that imply it is possible to find what you are looking for; in such vast spaces the term must be discarded. Thus any attempt at searching for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000).

Nevertheless, we still have the tantalizing possibility that if a creative idea seems inevitable after the event, then somehow the process might be reversed. This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). But nevertheless Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms.

It is necessary to abandon such connections with searching and to see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such profligate prototyping and ruthless experiment, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
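As a concrete reference point for the evolutionary methods mentioned above, the sketch below shows the bare generate-evaluate-select-vary loop of a simple evolutionary algorithm on a toy numeric genotype. It is a generic illustration of the mechanism, not any particular design system discussed here, and the fitness function is a placeholder for whatever evaluation a design application would supply.

```python
# Minimal sketch of an evolutionary algorithm: a population of candidate
# "designs" (here just real-valued vectors) is repeatedly evaluated, selected
# and mutated. The fitness function is a placeholder for a design evaluation.
import random

GENES, POP, GENERATIONS, MUT_STD = 8, 40, 60, 0.1

def fitness(genome):
    # Placeholder evaluation: prefer genomes close to an arbitrary target value.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    return [g + random.gauss(0.0, MUT_STD) for g in genome]

population = [[random.random() for _ in range(GENES)] for _ in range(POP)]

for gen in range(GENERATIONS):
    # Evaluate and keep the better half (truncation selection).
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    # Refill the population with mutated copies of the survivors.
    population = parents + [mutate(random.choice(parents))
                            for _ in range(POP - len(parents))]

best = max(population, key=fitness)
print("best fitness after", GENERATIONS, "generations:", round(fitness(best), 4))
```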

Relevance:

10.00%

Publisher:

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA), which can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel of the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
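Since MF-DFA is the workhorse of Part I, a compact sketch of the standard procedure (profile, segment-wise polynomial detrending, q-th-order fluctuation function, scaling fit) may help fix ideas. This follows the textbook algorithm rather than the thesis's own code, and it is applied to a synthetic white-noise series instead of real price data.

```python
# Sketch of multifractal detrended fluctuation analysis (MF-DFA):
# 1) build the profile (cumulative sum of the demeaned series),
# 2) split it into segments of size s and detrend each with a polynomial fit,
# 3) form the q-th-order fluctuation function F_q(s),
# 4) estimate the generalised Hurst exponent h(q) from log F_q(s) vs log s.
import numpy as np

def mfdfa(x, scales, q=2, order=1):
    y = np.cumsum(x - np.mean(x))                  # profile
    Fq = []
    for s in scales:
        n_seg = len(y) // s
        f2 = []
        for v in range(n_seg):
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            f2.append(np.mean((seg - trend) ** 2))  # detrended variance per segment
        f2 = np.array(f2)
        Fq.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
    slope, _ = np.polyfit(np.log(scales), np.log(Fq), 1)
    return slope                                    # generalised Hurst exponent h(q)

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)                      # uncorrelated increments
scales = np.array([16, 32, 64, 128, 256, 512])
print("h(2) for uncorrelated increments:", round(mfdfa(x, scales, q=2), 3))
# Expected value close to 0.5; h(2) > 0.5 would indicate long memory.
```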

Relevance:

10.00%

Publisher:

Abstract:

A major focus of research in nanotechnology is the development of novel, high-throughput techniques for the fabrication of arbitrarily shaped surface nanostructures, from sub-100 nm down to the atomic scale. A related pursuit is the development of simple and efficient means for the parallel manipulation and redistribution of adsorbed atoms, molecules and nanoparticles on surfaces – adparticle manipulation. These techniques will be used for the manufacture of nanoscale surface-supported functional devices in nanotechnologies such as quantum computing, molecular electronics and lab-on-a-chip, as well as for modifying surfaces to obtain novel optical, electronic, chemical, or mechanical properties.

A favourable approach to the formation of surface nanostructures is self-assembly. In self-assembly, nanostructures are grown by the aggregation of individual adparticles that diffuse by thermally activated processes on the surface. The passive nature of this process means it is generally not suited to the formation of arbitrarily shaped structures. The self-assembly of nanostructures at arbitrary positions has been demonstrated, though these demonstrations have typically required a pre-patterning treatment of the surface using sophisticated techniques such as electron beam lithography. On the other hand, a parallel adparticle manipulation technique would be suited to directing the self-assembly process to occur at arbitrary positions, without the need for pre-patterning the surface. There is at present a lack of techniques for the parallel manipulation and redistribution of adparticles to arbitrary positions on the surface. This is an issue that needs to be addressed, since such techniques can play an important role in nanotechnology.

In this thesis, we propose such a technique – thermal tweezers. In thermal tweezers, adparticles are redistributed by localised heating of the surface. This locally enhances the surface diffusion of adparticles so that they rapidly diffuse away from the heated regions. Using this technique, the redistribution of adparticles to form a desired pattern is achieved by heating the surface at specific regions. In this project, we have focussed on the holographic implementation of this approach, in which the surface is heated by holographic patterns of interfering pulsed laser beams. This implementation is suitable for the formation of arbitrarily shaped structures; the only condition is that the shape can be produced by holographic means. In the simplest case, the laser pulses are linearly polarised and intersect to form an interference pattern that is a modulation of intensity along a single direction. Strong optical absorption at the intensity maxima of the interference pattern results in an approximately sinusoidal variation of the surface temperature along one direction.

The main aim of this research project is to investigate the feasibility of the holographic implementation of thermal tweezers as an adparticle manipulation technique. Firstly, we investigate theoretically the surface diffusion of adparticles in the presence of a sinusoidal modulation of the surface temperature. Very strong redistribution of adparticles is predicted when there is strong interaction between the adparticle and the surface and the amplitude of the temperature modulation is ~100 K. We have proposed a thin metallic film deposited on a glass substrate and heated by interfering laser beams (optical wavelengths) as a means of generating a very large amplitude of surface temperature modulation. Indeed, we predict theoretically, by numerical solution of the thermal conduction equation, that the amplitude of the temperature modulation on the metallic film can be much greater than 100 K when heated by nanosecond pulses with an energy of ~1 mJ. The formation of surface nanostructures of less than 100 nm in width is predicted at optical wavelengths in this implementation of thermal tweezers. Furthermore, we propose a simple extension to this technique in which a spatial phase shift of the temperature modulation effectively doubles or triples the resolution. At the same time, increased resolution is predicted by reducing the wavelength of the laser pulses.

In addition, we present two distinctly different, computationally efficient numerical approaches for the theoretical investigation of the surface diffusion of interacting adparticles – the Monte Carlo Interaction Method (MCIM) and the random potential well method (RPWM). Using each of these approaches we have investigated thermal tweezers for the redistribution of both strongly and weakly interacting adparticles. We have predicted that strong interactions between adparticles can increase the effectiveness of thermal tweezers, by demonstrating practically complete adparticle redistribution into the low-temperature regions of the surface. This is promising from the point of view of thermal tweezers applied to the directed self-assembly of nanostructures.

Finally, we present a new and more efficient numerical approach to the theoretical investigation of thermal tweezers for non-interacting adparticles. In this approach, the local diffusion coefficient is determined from the solution of the Fokker-Planck equation. The diffusion equation is then solved numerically using the finite volume method (FVM) to directly obtain the probability density of adparticle position. We compare the predictions of this approach to those of the Ermak algorithm solution of the Langevin equation, and relatively good agreement is shown at intermediate and high friction. In the low friction regime, we predict and investigate the phenomenon of 'optimal' friction and describe its occurrence due to very long jumps of adparticles as they diffuse from the hot regions of the surface. Future research directions, both theoretical and experimental, are also discussed.
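The redistribution mechanism described above, thermally activated hopping that is faster in the hot regions of a sinusoidal temperature profile, can be illustrated with a small lattice Monte Carlo sketch. This is a generic Arrhenius-hopping toy model with hypothetical parameter values, not the MCIM, RPWM or Fokker-Planck/FVM machinery developed in the thesis.

```python
# Toy sketch: non-interacting adparticles hopping on a 1-D periodic lattice with
# Arrhenius hop probabilities set by a sinusoidal temperature profile
# T(x) = T0 + dT*sin(2*pi*x/L). Hops are faster in hot regions, so particles
# accumulate where the surface is cold. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(42)

L = 200                 # number of lattice sites (periodic boundary)
N = 5000                # number of adparticles
T0, dT = 300.0, 100.0   # mean surface temperature and modulation amplitude (K)
Ea = 0.5 * 1.602e-19    # activation energy for hopping: 0.5 eV in joules
kB = 1.380649e-23       # Boltzmann constant (J/K)
steps = 10000

x = rng.integers(0, L, size=N)                     # random initial positions
T = T0 + dT * np.sin(2.0 * np.pi * np.arange(L) / L)
p_hop = np.exp(-Ea / (kB * T))                     # relative hop probability per site
p_hop /= p_hop.max()                               # scale the fastest rate to 1

for _ in range(steps):
    attempt = rng.random(N) < p_hop[x]             # site-dependent hop attempt
    step = rng.choice([-1, 1], size=N)             # random hop direction
    x = np.where(attempt, (x + step) % L, x)

hot = np.sin(2.0 * np.pi * x / L) > 0              # particle sits in the hot half-period
print(f"fraction of particles in the hot regions: {hot.mean():.2f}")
# Expect a value below 0.5: adparticles pile up in the cold regions,
# and the imbalance grows as the number of steps increases.
```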