954 results for piecewise constant argument


Relevance: 100.00%

Abstract:

Lyapunov stability for a class of differential equations with piecewise constant argument (EPCA) is considered by means of the stability of an associated discrete equation. Applications to some nonlinear autonomous equations are given, improving some known linear cases.

Relevance: 100.00%

Abstract:

Dichotomic maps are used to study the stability and asymptotic stability of the null solution of a class of differential equations with argument [t], via associated discrete equations, where [·] designates the greatest integer function.

Relevance: 100.00%

Abstract:

Dichotomic maps are used to study the stability of the null solution of a class of differential equations with piecewise constant argument, via associated discrete equations. Copyright © 2008 Watam Press.

Relevance: 100.00%

Abstract:

This paper deals with the basic theory of existence, uniqueness and continuation of solutions of differential equations with piecewise constant argument. Results about asymptotic stability of the equation ẋ(t) = -bx(t) + f(x([t])) with argument [t], where [t] designates the greatest integer function, are established by means of dichotomic maps. Another example is given to illustrate the application of the method. Copyright © 2011 Watam Press.
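
None of this code is from the paper; it is a sketch of the associated discrete equation that such abstracts refer to. On each interval [n, n+1) the argument [t] is frozen at n, so the equation becomes the linear ODE ẋ(t) = -b·x(t) + f(x(n)), which integrates exactly to the map x(n+1) = e^(-b)·x(n) + ((1 - e^(-b))/b)·f(x(n)). The choice f(u) = 0.5·sin(u) and the parameters below are illustrative:

```python
import math

def epca_step(x_n, b, f):
    """One exact unit-time step of x'(t) = -b*x(t) + f(x([t]))."""
    e = math.exp(-b)
    return e * x_n + (1.0 - e) / b * f(x_n)

def simulate(x0, b, f, n_steps):
    """Iterate the associated discrete equation from x(0) = x0."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(epca_step(xs[-1], b, f))
    return xs

# With b = 1 and f(u) = 0.5*sin(u) we have |f'(0)| < b, and the
# trajectory of the discrete map decays toward the null solution.
traj = simulate(1.0, 1.0, lambda u: 0.5 * math.sin(u), 50)
```

Because the map reproduces the ODE exactly at integer times, stability of the null solution of the discrete equation mirrors that of the EPCA, which is the correspondence the dichotomic-map method exploits.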

Relevance: 100.00%

Abstract:

In this paper we demonstrate a simple and novel illumination model that can be used for illumination-invariant facial recognition. This model requires no prior knowledge of the illumination conditions and can be used when there is only a single training image per person. The proposed illumination model separates the effects of illumination over a small area of the face into two components: an additive component modelling the mean illumination, and a multiplicative component modelling the variance within the facial area. Illumination-invariant facial recognition is performed in a piecewise manner, by splitting the face image into blocks and then normalizing the illumination within each block based on the new lighting model. The assumptions underlying this novel lighting model have been verified on the YaleB face database. We show that magnitude 2D Fourier features can be used as robust facial descriptors within the new lighting model. Using only a single training image per person, our new method achieves high (in most cases 100%) identification accuracy on the YaleB, extended YaleB and CMU-PIE face databases.
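
As a rough illustration of the two-component idea (not the authors' code; the block size and the synthetic image are assumptions), normalizing each block to zero mean and unit variance cancels both an additive offset and a multiplicative gain within that block:

```python
import numpy as np

def normalize_blocks(image, block=8, eps=1e-8):
    """Zero-mean, unit-variance normalization of each block x block tile."""
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            tile = image[i:i+block, j:j+block].astype(float)
            out[i:i+block, j:j+block] = (tile - tile.mean()) / (tile.std() + eps)
    return out

# A uniform gain/offset applied to the image leaves the normalized
# result (nearly) unchanged, which is the invariance being sought.
rng = np.random.default_rng(0)
face = rng.random((32, 32))
lit = 0.3 + 2.0 * face  # simulated additive + multiplicative lighting change
```

In the paper the lighting is only locally constant, which is why the normalization is applied per block rather than to the whole image.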

Relevance: 100.00%

Abstract:

Removing noise from piecewise constant (PWC) signals is a challenging signal processing problem arising in many practical contexts. For example, in exploration geosciences, noisy drill hole records need to be separated into stratigraphic zones, and in biophysics, jumps between molecular dwell states have to be extracted from noisy fluorescence microscopy signals. Many PWC denoising methods exist, including total variation regularization, mean shift clustering, stepwise jump placement, running medians, convex clustering shrinkage and bilateral filtering; conventional linear signal processing methods are fundamentally unsuited. This paper (part I, the first of two) shows that most of these methods are associated with a special case of a generalized functional, minimized to achieve PWC denoising. The minimizer can be obtained by diverse solver algorithms, including stepwise jump placement, convex programming, finite differences, iterated running medians, least angle regression, regularization path following and coordinate descent. In the second paper, part II, we introduce novel PWC denoising methods and present comparisons between them on synthetic and real signals, showing that the new understanding of the problem gained in part I leads to new methods that have a useful role to play.
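
Among the listed solver families, iterated running medians is the easiest to sketch; the window half-width and the toy signal below are illustrative, not taken from the paper:

```python
import statistics

def running_median(x, half_width=2):
    """Replace each sample by the median of its (truncated) neighbourhood."""
    n = len(x)
    return [statistics.median(x[max(0, i - half_width):min(n, i + half_width + 1)])
            for i in range(n)]

def iterated_running_median(x, half_width=2, max_iter=100):
    """Apply the running median until a fixed point (a 'root signal')."""
    for _ in range(max_iter):
        y = running_median(x, half_width)
        if y == x:
            return y
        x = y
    return x

# Noisy signal with three underlying levels (0, 3, 1).
noisy = [0.0, 0.1, -0.1, 0.05, 3.0, 2.9, 3.1, 3.0, 0.9, 1.1, 1.0, 1.05]
clean = iterated_running_median(noisy)
```

The fixed point of this iteration is close to piecewise constant, which is why the method appears among the PWC denoisers unified in the paper.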

Relevance: 100.00%

Abstract:

Removing noise from signals which are piecewise constant (PWC) is a challenging signal processing problem that arises in many practical scientific and engineering contexts. In the first paper (part I) of this series of two, we presented background theory, building on results from the image processing community, to show that the majority of these algorithms, and more proposed in the wider literature, are each associated with a special case of a generalized functional that, when minimized, solves the PWC denoising problem, and we showed how the minimizer can be obtained by a range of computational solver algorithms. In this second paper (part II), using the understanding developed in part I, we introduce several novel PWC denoising methods which, for example, combine the global behaviour of mean shift clustering with the local smoothing of total variation diffusion, and show example solver algorithms for these new methods. Comparisons between these methods are performed on synthetic and real signals, revealing that our new methods have a useful role to play. Finally, overlaps between the generalized methods of these two papers and others, such as wavelet shrinkage, hidden Markov models, and piecewise smooth filtering, are touched on.

Relevance: 100.00%

Abstract:

The asymptotic stability of the null solution of the equation ẋ(t) = -a(t)x(t)+b(t)x([t]) with argument [t], where [t] designates the greatest integer function, is studied by means of dichotomic maps. © 2010 Academic Publications.

Relevance: 80.00%

Abstract:

This paper discusses control strategies adapted for practical implementation and efficient motion of underwater vehicles. These trajectories are piecewise constant thrust arcs with few actuator switchings. We provide a numerical algorithm which computes the time-efficient trajectories parameterized by the switching times. We discuss both the theoretical analysis and experimental implementation results.

Relevance: 80.00%

Abstract:

The usual assumption made in the time-minimising transportation problem is that the time for transporting a positive amount on a route is independent of the actual amount transported on that route. In this paper we make the more general and natural assumption that the time depends on the actual amount transported. We assume that the time function for each route is an increasing piecewise constant function. Four algorithms - (1) a threshold algorithm, (2) an upper bounding technique, (3) a primal-dual approach, and (4) a branch-and-bound algorithm - are presented to solve the given problem. A method is also given to compute the minimum bottleneck shipment corresponding to the optimal time. A numerical example is solved to illustrate the algorithms presented in this paper.
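
A minimal sketch of the threshold idea, for the classical amount-independent case that the paper generalizes (the data, node layout, and use of Edmonds-Karp max-flow as the feasibility test are our assumptions, not necessarily the paper's formulation): candidate bottleneck times are tried in increasing order, keeping only routes no slower than the threshold, until all supply can be shipped.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix (mutates cap)."""
    n = len(cap)
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow
        path, v = [], t           # recover the augmenting path
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[u][v] for u, v in path)
        for u, v in path:         # update residual capacities
            cap[u][v] -= aug
            cap[v][u] += aug
        flow += aug

def min_bottleneck_time(supply, demand, time):
    """Smallest T such that all supply ships using only routes with time <= T."""
    m, n = len(supply), len(demand)
    total = sum(supply)
    for T in sorted({t for row in time for t in row}):
        # nodes: 0 = source, 1..m = supply, m+1..m+n = demand, last = sink
        N = m + n + 2
        cap = [[0] * N for _ in range(N)]
        for i in range(m):
            cap[0][1 + i] = supply[i]
        for j in range(n):
            cap[1 + m + j][N - 1] = demand[j]
        for i in range(m):
            for j in range(n):
                if time[i][j] <= T:
                    cap[1 + i][1 + m + j] = total
        if max_flow(cap, 0, N - 1) == total:
            return T

supply = [7, 8]
demand = [10, 5]
time = [[3, 9],
        [6, 2]]
best_T = min_bottleneck_time(supply, demand, time)
```

In the paper's setting the time on a route additionally steps upward with the amount shipped, so the feasibility test must account for the piecewise constant time functions rather than a single time per route.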

Relevance: 80.00%

Abstract:

Segmentation is a data mining technique yielding simplified representations of sequences of ordered points. A sequence is divided into some number of homogeneous segments, and all points within a segment are described by a single value. The focus in this thesis is on piecewise-constant segments, where the most likely description for each segment and the most likely segmentation into some number of segments can be computed efficiently. Representing sequences as segmentations is useful in, e.g., storage and indexing tasks in sequence databases, and segmentation can be used as a tool in learning about the structure of a given sequence. The discussion in this thesis begins with basic questions related to segmentation analysis, such as choosing the number of segments and evaluating the obtained segmentations. Standard model selection techniques are shown to perform well for the sequence segmentation task. Segmentation evaluation is proposed with respect to a known segmentation structure. Applying segmentation to certain features of a sequence is shown to yield segmentations that are significantly close to the known underlying structure. Two extensions to the basic segmentation framework are introduced: unimodal segmentation and basis segmentation. The former is concerned with segmentations where the segment descriptions first increase and then decrease, and the latter with the interplay between different dimensions and segments in the sequence. These problems are formally defined, and algorithms for solving them are provided and analyzed. Practical applications for segmentation techniques include time series and data stream analysis, text analysis, and biological sequence analysis. In this thesis, segmentation applications are demonstrated in analyzing genomic sequences.
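
The "computed efficiently" claim for piecewise-constant segments usually refers to a dynamic program of the following shape (an illustrative implementation, not code from the thesis): the cost of describing points i..j by their mean comes from prefix sums, and dp[seg][j] is the best total error for the first j points using seg segments.

```python
def segment(points, k):
    """Optimal k-segment piecewise-constant fit under squared error."""
    n = len(points)
    p1 = [0.0] * (n + 1)  # prefix sums
    p2 = [0.0] * (n + 1)  # prefix sums of squares
    for i, x in enumerate(points):
        p1[i + 1] = p1[i] + x
        p2[i + 1] = p2[i] + x * x

    def sse(i, j):  # squared error of points[i:j] around their mean
        s, s2, m = p1[j] - p1[i], p2[j] - p2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    dp = [[INF] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(1, n + 1):
            for i in range(seg - 1, j):
                c = dp[seg - 1][i] + sse(i, j)
                if c < dp[seg][j]:
                    dp[seg][j], cut[seg][j] = c, i
    bounds, j = [], n  # recover segment boundaries by backtracking
    for seg in range(k, 0, -1):
        bounds.append(j)
        j = cut[seg][j]
    return dp[k][n], sorted(bounds)

err, bounds = segment([1, 1, 1, 5, 5, 5, 9, 9], 3)
```

The run time is O(k·n²), which is the standard baseline the thesis's model selection and extension questions build on.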

Relevance: 80.00%

Abstract:

This doctoral thesis consists of four essays on questions in empirical labour economics. The first essay examines the effect of the level of unemployment benefits on re-employment in Finland. In 2003, earnings-related unemployment benefits were raised for workers with long employment histories. The increase averaged 15% and applied to the first 150 days of unemployment. The study estimates the effect of the increase by comparing re-employment probabilities between the group that received the increase and a comparison group, before and after the reform. The results indicate that the benefit increase significantly lowered the probability of re-employment, by about 16% on average. The effect is largest at the beginning of the unemployment spell and disappears once entitlement to the increased earnings-related benefit ends. The second essay studies the long-term costs of unemployment in Finland, focusing on the deep recession of 1991-1993. During the recession, plant closures increased sharply and the unemployment rate rose by more than 13 percentage points. The study compares prime working-age men who became unemployed through plant closure during the recession with those who remained employed, following outcomes over a six-year period. In 1999, the annual earnings of the group that experienced unemployment during the recession were on average 25% lower than those of the comparison group. The earnings loss was due to both lower employment and lower wage levels. The third essay examines the unemployment problem caused by the Finnish recession of the early 1990s by studying the determinants of unemployment duration at the individual level. The focus is on how changes in the composition of the unemployed and in labour demand affected average duration. It is often assumed that those who become unemployed in a recession are, on average, harder to employ, which would in itself lengthen average unemployment duration.
The results indicate that the macro-level demand effect was central to unemployment duration, while compositional changes had only a small duration-increasing effect during the recession. The final essay studies the effect of the business cycle on the incidence of workplace accidents. The study uses Swedish individual-level hospitalization data linked to a population database. The data make it possible to examine alternative explanations for the increase in accidents during booms, which has been attributed to, for example, stress or haste. The results show that workplace accidents are cyclical, but only for certain groups. Variation in the composition of the workforce may explain part of the cyclicality of women's accidents. For men, only less severe accidents are cyclical, which may reflect strategic behaviour.

Relevance: 80.00%

Abstract:

In this paper, we exploit the idea of decomposition to match buyers and sellers in an electronic exchange for trading large volumes of homogeneous goods, where the buyers and sellers specify marginal-decreasing piecewise constant price curves to capture volume discounts. Such exchanges are relevant for automated trading in many e-business applications. The problem of determining winners and Vickrey prices in such exchanges is known to have a worst-case complexity equal to that of as many as (1 + m + n) NP-hard problems, where m is the number of buyers and n is the number of sellers. Our method decomposes the overall exchange problem into two separate and simpler problems: 1) a forward auction and 2) a reverse auction, which turn out to be generalized knapsack problems. In the proposed approach, we first determine the quantity of units to be traded between the sellers and the buyers using fast heuristics developed by us. Next, we solve a forward auction and a reverse auction using fully polynomial time approximation schemes available in the literature. The proposed approach has worst-case polynomial time complexity, and our experimentation shows that it produces good-quality solutions to the problem. Note to Practitioners: In recent times, electronic marketplaces have provided an efficient way for businesses and consumers to trade goods and services. The use of innovative mechanisms and algorithms has made it possible to improve the efficiency of electronic marketplaces by enabling optimization of revenues for the marketplace and of utilities for the buyers and sellers. In this paper, we look at single-item, multiunit electronic exchanges. These are electronic marketplaces where buyers submit bids and sellers submit asks for multiple units of a single item. We allow buyers and sellers to specify volume discounts using suitable functions.
Such exchanges are relevant for high-volume business-to-business trading of standard products, such as silicon wafers, very large-scale integrated chips, desktops, telecommunications equipment, commoditized goods, etc. The problem of determining winners and prices in such exchanges is known to involve solving many NP-hard problems. Our paper exploits the familiar idea of decomposition, uses certain algorithms from the literature, and develops two fast heuristics to solve the problem in a near-optimal way in worst-case polynomial time.
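
As a small illustration of the marginal-decreasing piecewise constant price curves the exchange accepts (the band breakpoints and prices below are made up, not from the paper), the total cost of a quantity is accumulated band by band, with the per-unit price dropping across bands:

```python
def total_price(curve, quantity):
    """curve: list of (band_upper_limit, marginal_price) with increasing
    limits and decreasing prices. Returns the total cost of `quantity` units."""
    total, prev = 0.0, 0
    for limit, price in curve:
        units = min(quantity, limit) - prev  # units bought within this band
        if units <= 0:
            break
        total += units * price
        prev = limit
    return total

# First 10 units at $5 each, next 10 at $4, anything beyond at $3.
curve = [(10, 5.0), (20, 4.0), (10**9, 3.0)]
```

Decreasing marginal prices make the bids non-convex in quantity, which is what turns winner determination into the generalized knapsack problems the paper solves approximately.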

Relevance: 80.00%

Abstract:

We investigate the effect of a prescribed tangential velocity on the drag force on a circular cylinder in a spanwise uniform cross flow. Using a combination of theoretical and numerical techniques, we attempt to determine the optimal tangential velocity profiles which reduce the drag force acting on the cylindrical body while minimizing the net power consumption, characterized through a non-dimensional power loss coefficient (C-PL). A striking conclusion of our analysis is that the tangential velocity associated with the potential flow, which completely suppresses the drag force, is not optimal at either small or large, but finite, Reynolds numbers. When inertial effects are negligible (Re << 1), theoretical analysis based on the two-dimensional Oseen equations gives the optimal tangential velocity profile which leads to energetically efficient drag reduction. Furthermore, in the limit of zero Reynolds number (Re -> 0), minimum power loss is achieved for a tangential velocity profile corresponding to a shear-free perfect slip boundary. At finite Re, results from numerical simulations indicate that perfect slip is not optimal and a further reduction in drag can be achieved for reduced power consumption. A gradual increase in the strength of a tangential velocity which involves only the first reflectionally symmetric mode leads to a monotonic reduction in drag and eventual thrust production. Simulations reveal the existence of an optimal strength for which the power consumption attains a minimum. At a Reynolds number of 100, the minimum value of the power loss coefficient (C-PL = 0.37) is obtained when the maximum tangential surface velocity is about one and a half times the free-stream uniform velocity, corresponding to a drag reduction of approximately 77%; C-PL = 0.42 and 0.50 for the perfect slip and potential flow cases, respectively.
Our results suggest that the potential flow tangential velocity enables energetically efficient propulsion at all Reynolds numbers but optimal drag reduction only for Re -> infinity. The two-dimensional strategy of reducing drag while minimizing net power consumption is shown to be effective in three dimensions via numerical simulation of flow past an infinite circular cylinder at a Reynolds number of 300. Finally, a strategy of reducing drag through piecewise constant tangential velocities distributed along the cylinder periphery, suitable for practical implementation and amenable to experimental testing, is proposed and analysed.