917 results for dynamic time warping
Abstract:
Time-lagged responses of biological variables to landscape modifications are widely recognized but rarely considered in ecological studies. To test for time-lags in the response of trees, small mammals, birds and frogs to changes in fragment area and connectivity, we studied a fragmented and highly dynamic landscape in the Atlantic forest region. We also investigated the biological correlates associated with differential responses among taxonomic groups. Species richness and abundance for the four taxonomic groups were measured in 21 secondary forest fragments during the same period (2000-2002), following a standardized protocol. Data analyses were based on power regressions and model selection procedures. The model inputs included present (2000) and past (1962, 1981) fragment areas and connectivity, as well as observed changes in these parameters. Although past landscape structure was particularly relevant for trees, all taxonomic groups except small mammals were affected by landscape dynamics, exhibiting a time-lagged response. Furthermore, fragment area was more important for species groups with lower dispersal capacity, while species with higher dispersal ability responded more strongly to connectivity measures. Although these secondary forest fragments still maintain a large fraction of their original biodiversity, the delay in biological response, combined with high rates of deforestation and fast forest regeneration, implies a reduction in the average age of the forest. This also indicates that future species losses are likely, especially among strict forest dwellers. Conservation actions should be implemented to reduce species extinctions, to maintain old-growth forests and to favour the regeneration process. Our results demonstrate that landscape history can strongly affect the present distribution pattern of species in fragmented landscapes and should be considered in conservation planning. (C) 2009 Elsevier Ltd. All rights reserved.
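For illustration only, a minimal sketch (placeholder data and variable names, not the study's code or results) of the kind of analysis described above: a power regression of species richness against fragment area, fitted on log-log axes, with AIC used to compare a present-area predictor against a past-area predictor.

```python
# Minimal sketch: power regression S = c * A^z fitted on log-log axes, with a
# Gaussian AIC (up to a constant) comparing present vs. past fragment area as
# predictors. All data below are synthetic placeholders.
import numpy as np

def fit_power(area, richness):
    """Fit log10(S) = log10(c) + z*log10(A) by least squares; return (c, z, aic)."""
    X = np.column_stack([np.ones_like(area), np.log10(area)])
    y = np.log10(richness)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    n, k = len(y), 2
    aic = n * np.log(np.mean(resid**2)) + 2 * k
    return 10 ** coef[0], coef[1], aic

rng = np.random.default_rng(1)
area_2000 = rng.uniform(2, 200, 21)                 # 21 fragments, present area (ha), placeholder
area_1962 = area_2000 * rng.uniform(1.0, 3.0, 21)   # past areas were larger, placeholder
richness = 8 * area_1962**0.25 * rng.lognormal(0, 0.1, 21)

for label, a in [("present area", area_2000), ("past area", area_1962)]:
    c, z, aic = fit_power(a, richness)
    print(f"{label}: z={z:.2f}, AIC={aic:.1f}")
```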
Abstract:
We present a minor but essential modification to the CODEX 1D-MAS exchange experiment. The new CONTRA method, which requires only minor changes to the original sequence, has advantages over the previously introduced S-CODEX, since it is less sensitive to artefacts caused by finite pulse lengths. The performance of this variant, including the finite-pulse effect, was confirmed by SIMPSON calculations and demonstrated on a number of dynamic systems. (C) 2007 Elsevier Inc. All rights reserved.
Abstract:
The aim of this work was to evaluate the effect of storage time on the thermal properties of triethylene glycol dimethacrylate/2,2-bis[4-(2-hydroxy-3-methacryloxy-prop-1-oxy)-phenyl]propane bisphenyl-alpha-glycidyl ether dimethacrylate (TB) copolymers used in formulations of dental resins after photopolymerization. The TB copolymers were prepared by photopolymerization with an Ultrablue IS light-emitting diode, stored in the dark for 160 days at 37 degrees C, and characterized by differential scanning calorimetry (DSC), dynamic mechanical analysis (DMA), and Fourier transform infrared spectroscopy with attenuated total reflection. DSC curves indicated the presence of an exothermic peak, confirming that the reaction was not completed during the photopolymerization process. This exothermic peak became smaller as a function of storage time and shifted to higher temperatures. In DMA studies, a plot of the loss tangent versus temperature initially showed two well-defined peaks, confirming the presence of residual monomers that were not converted during the photopolymerization process. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 112: 679-684, 2009
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Dynamic light scattering (DLS), time-resolved fluorescence quenching (TRFQ), and isothermal titration microcalorimetry have been used to show that, in dilute solution, low molecular weight poly(ethylene glycol) (PEG, M_w = 12 kDa) interacts with the nonionic surfactant octaethylene glycol n-dodecyl monoether, C12E8, to form a complex. Whereas the relaxation time distributions for the binary C12E8/water and PEG/water systems are unimodal, in the ternary mixtures they may be either uni- or bimodal depending on the relative concentrations of the components. At low concentrations of PEG or surfactant, the components of the relaxation time distribution are unresolvable, but the distribution becomes bimodal at higher concentrations of either polymer or surfactant. For the ternary system in excess surfactant, on the basis of the changes in apparent hydrodynamic radii and scattered intensities, we ascribe the fast mode to a single micelle whose surface is associated with the polymer, and the slow mode to a similar complex containing two or three micelles per PEG chain. Titration microcalorimetry results show that the interaction between C12E8 and PEG is exothermic, about 1 kJ mol^-1, at concentrations higher than the CMC of C12E8. The aggregation number, obtained by TRFQ, is roughly constant when either the PEG or the C12E8 concentration is increased at a given concentration of the second component, owing to the increasing number of surfactant micelles inside the complex.
Abstract:
We consider infinite-horizon optimal impulsive control problems in which a given cost function is minimized by choosing control strategies that drive the state to a point in a given closed set C∞. We present necessary conditions of optimality in the form of a maximum principle in which the boundary condition on the adjoint variable ensures non-degeneracy despite the infinite time horizon. These conditions are given first for conventional systems and then for impulsive control problems. They are proved by considering a family of approximating auxiliary conventional (impulse-free) optimal control problems defined on an increasing sequence of finite time intervals. As far as we know, results of this kind have not been derived previously. © 2010 IFAC.
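For orientation, a schematic statement in standard textbook form of the finite-horizon maximum principle that the approximating auxiliary problems rely on; this is not the paper's infinite-horizon or impulsive result, and the transversality condition associated with the terminal set is omitted.

```latex
% Conventional problem: minimize \int_{0}^{T} L(x,u)\,dt subject to \dot{x}=f(x,u),\ u(t)\in U.
% With Hamiltonian H(x,p,u) = p \cdot f(x,u) - L(x,u), an optimal pair (x^*,u^*) admits an
% adjoint arc p satisfying
\dot{p}(t) = -\frac{\partial H}{\partial x}\bigl(x^*(t), p(t), u^*(t)\bigr), \qquad
H\bigl(x^*(t), p(t), u^*(t)\bigr) = \max_{v \in U} H\bigl(x^*(t), p(t), v\bigr).
```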
Abstract:
We provide some properties of absolutely continuous functions on time scales. We then consider a class of dynamic inclusions on time scales and extend to this class a convergence result: a sequence of almost-trajectories of the inclusion converges to a limit which is itself a trajectory of the inclusion in question. We also introduce the so-called Euler solution to dynamical systems on time scales and prove its existence. Combining the existence of Euler solutions with the compactness-type result described above ensures the existence of an actual trajectory of the dynamic inclusion when the set-valued vector field has nonempty, compact, convex values and a closed graph. © 2012 Springer-Verlag.
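As a small illustration (not taken from the paper; the vector field and time scale below are arbitrary), the explicit Euler scheme on a time scale advances the state by the local graininess mu(t) = sigma(t) - t:

```python
# Explicit Euler on a discrete time scale T = {t_0 < t_1 < ... < t_N}:
#   x(t_{k+1}) = x(t_k) + mu_k * f(t_k, x(t_k)),  mu_k = t_{k+1} - t_k.
# Names (euler_on_time_scale, f) are illustrative.
import numpy as np

def euler_on_time_scale(f, ts, x0):
    """Propagate x' = f(t, x) over the points of the time scale `ts`."""
    xs = [np.asarray(x0, dtype=float)]
    for tk, tk1 in zip(ts[:-1], ts[1:]):
        mu = tk1 - tk                       # graininess at t_k
        xs.append(xs[-1] + mu * f(tk, xs[-1]))
    return np.array(xs)

# Example: x' = -x on a time scale mixing dense and isolated points.
ts = np.concatenate([np.linspace(0.0, 1.0, 51), np.array([1.5, 2.5, 4.0])])
traj = euler_on_time_scale(lambda t, x: -x, ts, x0=[1.0])
print(traj[-1])
```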
Abstract:
Micro-electromechanical systems (MEMS) are micro-scale devices able to convert electrical energy into mechanical energy or vice versa. In this paper, the mathematical model of the electronic circuit of a resonant MEMS mass sensor, with time-periodic parametric excitation, was analyzed and controlled by two strategies: Chebyshev polynomial expansion of the Picard iteration combined with the Lyapunov-Floquet transformation, and Optimal Linear Feedback Control (OLFC). Both controllers combine feedback and feedforward actions; the first strategy obtains the feedback control via the Picard iteration and Lyapunov-Floquet transformation, while the second is based on optimal control theory. Numerical simulations show the efficiency of the two control methods, as well as the sensitivity of each control strategy to parametric errors. Without parametric errors, both control strategies were effective in maintaining the system in the desired orbit. In the presence of parametric errors, however, the OLFC technique was more robust.
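A toy sketch only, not the authors' controllers: it simulates a parametrically excited oscillator stabilized by a simple linear state feedback, just to make the control setting concrete; every parameter value and gain below is an assumption.

```python
# Toy model: x'' + 2*zeta*x' + (delta + eps*cos(omega*t))*x = u,
# with a simple state feedback u = -k1*x - k2*x' (illustrative, not OLFC).
import numpy as np
from scipy.integrate import solve_ivp

zeta, delta, eps, omega = 0.01, 1.0, 0.6, 2.0    # assumed plant parameters
k1, k2 = 0.5, 0.8                                # assumed feedback gains

def rhs(t, y):
    x, v = y
    u = -k1 * x - k2 * v                         # linear state feedback
    a = u - 2 * zeta * v - (delta + eps * np.cos(omega * t)) * x
    return [v, a]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0], max_step=0.05)
print("final amplitude:", abs(sol.y[0, -1]))
```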
Generalizing the dynamic field theory of spatial cognition across real and developmental time scales
Abstract:
Within cognitive neuroscience, computational models are designed to provide insights into the organization of behavior while adhering to neural principles. These models should provide sufficient specificity to generate novel predictions while maintaining the generality needed to capture behavior across tasks and/or time scales. This paper presents one such model, the Dynamic Field Theory (DFT) of spatial cognition, showing new simulations that provide a demonstration proof that the theory generalizes across developmental changes in performance in four tasks—the Piagetian A-not-B task, a sandbox version of the A-not-B task, a canonical spatial recall task, and a position discrimination task. Model simulations demonstrate that the DFT can accomplish both specificity—generating novel, testable predictions—and generality—spanning multiple tasks across development with a relatively simple developmental hypothesis. Critically, the DFT achieves generality across tasks and time scales with no modification to its basic structure and with a strong commitment to neural principles. The only change necessary to capture development in the model was an increase in the precision of the tuning of receptive fields as well as an increase in the precision of local excitatory interactions among neurons in the model. These small quantitative changes were sufficient to move the model through a set of quantitative and qualitative behavioral changes that span the age range from 8 months to 6 years and into adulthood. We conclude by considering how the DFT is positioned in the literature, the challenges on the horizon for our framework, and how a dynamic field approach can yield new insights into development from a computational cognitive neuroscience perspective.
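For illustration, a minimal one-dimensional dynamic neural field of the Amari type that DFT models build on (not the published implementation; all numerical values are assumptions); the "developmental" change here mirrors the sharpening of local excitatory interactions described above.

```python
# 1-D dynamic neural field:  tau * du/dt = -u + h + conv(w, f(u)) + input,
# with a local-excitation / lateral-inhibition kernel w. Sharpening the
# excitatory width stands in for the developmental hypothesis in the text.
import numpy as np

n, dx, tau, h = 181, 1.0, 10.0, -5.0
x = np.arange(n) * dx

def kernel(sigma_exc, a_exc=8.0, a_inh=3.0, sigma_inh=25.0):
    d = x - x[n // 2]
    return (a_exc * np.exp(-d**2 / (2 * sigma_exc**2))
            - a_inh * np.exp(-d**2 / (2 * sigma_inh**2)))

def simulate(sigma_exc, steps=400, dt=1.0):
    u = np.full(n, h)
    w = kernel(sigma_exc)
    stim = 6.0 * np.exp(-(x - 90.0)**2 / (2 * 5.0**2))   # localized input
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-4.0 * u))               # sigmoidal firing rate
        interaction = dx * np.convolve(f, w, mode="same")
        u += dt / tau * (-u + h + interaction + stim)
    return u

broad, sharp = simulate(sigma_exc=8.0), simulate(sigma_exc=4.0)
print(broad.max(), sharp.max())    # peak activation for the two "ages"
```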
Abstract:
We study measure functional differential equations and clarify their relation to generalized ordinary differential equations. We show that functional dynamic equations on time scales represent a special case of measure functional differential equations. For both types of equations, we obtain results on the existence and uniqueness of solutions, continuous dependence, and periodic averaging.
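For reference, the integral form in which measure functional differential equations are usually written in this setting, with x_s the solution segment, g a nondecreasing function, and the integral understood in the Kurzweil-Stieltjes sense:

```latex
x(t) = x(t_0) + \int_{t_0}^{t} f(x_s, s)\,\mathrm{d}g(s),
\qquad x_s(\theta) = x(s+\theta),\ \theta \in [-r, 0].
```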
Abstract:
Among the experimental methods commonly used to characterize the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the corresponding modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters.

The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration.

The second chapter is entirely devoted to the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared, and attention is then focused on some advantages of the proposed methodology.

The third chapter focuses on the study of real structures subjected to experimental tests in which the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domains of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes. Its application to ambient vibrations yields accurate modal parameters of the system, although some important caveats apply to the damping estimates.

The fourth chapter still deals with the post-processing of data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal; in the case of ambient vibrations the signals are in fact often affected by a significant level of noise.

The fifth chapter focuses on another important aspect of the identification process: model updating. Starting from the modal parameters obtained from ambient vibration tests performed by the University of Porto in 2008 and by the University of Sheffield on the Humber Bridge in England, an FE model of the bridge is built, in order to determine which type of model captures more accurately the real dynamic behaviour of the bridge.

The sixth chapter outlines the conclusions of the presented research. They concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in the identification process for tests with unknown input, and finally the problem of 3D modelling of systems with many degrees of freedom and with different types of uncertainty.
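As a minimal illustration of the frequency-domain identification step described in the second chapter (an assumed setup, not the thesis code): the classical H1 estimator H1(f) = S_xy(f)/S_xx(f), computed from Welch-averaged spectra of a measured force x and response y.

```python
# H1 FRF estimate from a forced-vibration test, using a synthetic SDOF system
# as a stand-in for the measured response. All values are placeholders.
import numpy as np
from scipy.signal import TransferFunction, csd, lsim, welch

fs = 1024.0                            # sampling rate [Hz], assumed
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
x = rng.standard_normal(t.size)        # broadband excitation (stand-in for the force)

# Stand-in response: a lightly damped SDOF system at ~12 Hz driven by x.
wn, z = 2 * np.pi * 12.0, 0.02
sys = TransferFunction([1.0], [1.0, 2 * z * wn, wn**2])
_, y, _ = lsim(sys, U=x, T=t)

f, Sxy = csd(x, y, fs=fs, nperseg=4096)    # input-output cross-spectrum
_, Sxx = welch(x, fs=fs, nperseg=4096)     # input auto-spectrum
H1 = Sxy / Sxx                             # H1 FRF estimate
print("FRF peak near", f[np.argmax(np.abs(H1))], "Hz")
```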
Abstract:
The new generation of multicore processors opens new perspectives for the design of embedded systems. Multiprocessing, however, poses new challenges to the scheduling of real-time applications, in which ever-increasing computational demands are constantly flanked by the need to meet critical timing constraints. Many research works have contributed to this field by introducing new advanced scheduling algorithms. However, although many of these works have solidly demonstrated their effectiveness, the actual support for multiprocessor real-time scheduling offered by current operating systems is still very limited. This dissertation deals with the implementation aspects of real-time schedulers in modern embedded multiprocessor systems. The first contribution is an open-source scheduling framework, which is capable of realizing complex multiprocessor scheduling policies, such as G-EDF, on conventional operating systems, exploiting only their native scheduler from user space. A set of experimental evaluations compares the proposed solution to other research projects that pursue the same goals by means of kernel modifications, highlighting comparable scheduling performance. The principles that underpin the operation of the framework, originally designed for symmetric multiprocessors, have been extended first to asymmetric ones, which are subject to major restrictions such as the lack of support for task migration, and later to re-programmable hardware architectures (FPGAs). In the latter case, this work introduces a scheduling accelerator, which offloads most of the scheduling operations to the hardware and exhibits extremely low scheduling jitter. The realization of a portable scheduling framework presented many interesting software challenges. One of these was timekeeping. In this regard, a further contribution is a novel data structure, called the addressable binary heap (ABH). The ABH, which is conceptually a pointer-based implementation of a binary heap, shows very interesting average- and worst-case performance when addressing the problem of tick-less timekeeping with high-resolution timers.
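To make the addressability idea concrete, a minimal sketch of a min-heap keyed by expiration time whose entries can be cancelled by handle in O(log n); note that the ABH described above is pointer-based, whereas this illustrative version is array-backed with a handle-to-index map.

```python
# Addressable min-heap sketch: push() returns a handle, cancel(handle) removes
# an arbitrary entry, pop_min() returns the earliest expiration.
class AddressableHeap:
    def __init__(self):
        self._a = []      # heap array of (key, handle)
        self._pos = {}    # handle -> index in _a
        self._next = 0

    def _swap(self, i, j):
        self._a[i], self._a[j] = self._a[j], self._a[i]
        self._pos[self._a[i][1]] = i
        self._pos[self._a[j][1]] = j

    def _sift_up(self, i):
        while i > 0 and self._a[i][0] < self._a[(i - 1) // 2][0]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self._a)
        while True:
            l, r, s = 2 * i + 1, 2 * i + 2, i
            if l < n and self._a[l][0] < self._a[s][0]:
                s = l
            if r < n and self._a[r][0] < self._a[s][0]:
                s = r
            if s == i:
                return
            self._swap(i, s)
            i = s

    def push(self, key):
        h = self._next
        self._next += 1
        self._a.append((key, h))
        self._pos[h] = len(self._a) - 1
        self._sift_up(len(self._a) - 1)
        return h

    def _remove_at(self, i):
        key, h = self._a[i]
        last = self._a.pop()
        del self._pos[h]
        if i < len(self._a):           # refill the hole and restore heap order
            self._a[i] = last
            self._pos[last[1]] = i
            self._sift_down(i)
            self._sift_up(i)
        return key, h

    def pop_min(self):
        return self._remove_at(0)

    def cancel(self, handle):
        return self._remove_at(self._pos[handle])

heap = AddressableHeap()
h1, h2 = heap.push(5.0), heap.push(2.0)
heap.cancel(h1)                        # O(log n) removal by handle
print(heap.pop_min())                  # (2.0, handle of the second timer)
```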
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to check the design and the function of a product on a virtual prototype. One use case is the verification of safety clearances between individual parts, the so-called clearance analysis. Engineers determine, for selected parts, whether they maintain a prescribed safety distance to the surrounding parts both in their rest position and during a motion. If parts fall below the safety distance, their shape or position must be changed. For this it is important to know exactly which regions of the parts violate the safety distance.

In this work we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety distance. Each object is given as a set of primitives (e.g. triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety distance and call it the set of all tolerance-violating primitives. We present a complete solution, which can be divided into the following three major topics.

In the first part of this work we investigate algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety distance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects consisting of several hundred thousand primitives each.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the previously used uniform grids, which we call Shrubs. Previous approaches to the memory optimization of uniform grids rely mainly on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our use case, neighbouring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of the cell contents of a uniform grid, exploiting the redundant cell contents, to one fifth of the original size and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure clearance analysis, we show applications to various path-planning problems.
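For illustration, a minimal sketch (not the dissertation's combined data structure) of a uniform-grid broad phase for safety-distance queries: primitives are binned by cell, and a query bounding box enlarged by the safety distance collects the candidate primitives on which the narrow-phase tolerance tests would run.

```python
# Uniform-grid broad phase: insert primitives by their AABBs, then collect
# candidates for a query AABB enlarged by the safety distance d.
from collections import defaultdict
from itertools import product
import numpy as np

class UniformGrid:
    def __init__(self, cell_size):
        self.h = cell_size
        self.cells = defaultdict(list)

    def _span(self, lo, hi):
        lo_i = np.floor(lo / self.h).astype(int)
        hi_i = np.floor(hi / self.h).astype(int)
        return product(*(range(a, b + 1) for a, b in zip(lo_i, hi_i)))

    def insert(self, prim_id, lo, hi):
        for c in self._span(np.asarray(lo), np.asarray(hi)):
            self.cells[c].append(prim_id)

    def candidates(self, lo, hi, d):
        lo, hi = np.asarray(lo) - d, np.asarray(hi) + d   # enlarge by safety distance
        out = set()
        for c in self._span(lo, hi):
            out.update(self.cells.get(c, ()))
        return out    # narrow-phase tolerance tests run on these candidates only

grid = UniformGrid(cell_size=1.0)
grid.insert(0, lo=(0, 0, 0), hi=(0.5, 0.5, 0.5))
print(grid.candidates(lo=(1.2, 0, 0), hi=(1.4, 0.2, 0.2), d=0.8))
```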
Abstract:
For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core issue for obtaining realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a possibility to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the graphics processing unit (GPU) using an environment model and the current ToF camera image. Dynamic scene content is thereby automatically extracted and tracked, and this information is used for planning and alignment of virtual content. A further benefit is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. The paper gives an overview of the whole system approach, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.
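A minimal sketch of the depth-keying idea (illustrative only; array names and the tolerance value are assumptions): pixels whose current ToF depth lies in front of the static environment model by more than a tolerance are classified as dynamic foreground.

```python
# Depth keying against a static environment model: foreground = pixels that
# are measurably closer to the camera than the stored background depth.
import numpy as np

def depth_key(depth, background, tol=0.05):
    """Return a boolean foreground mask from a ToF depth frame.

    depth, background: float arrays of per-pixel distances in metres.
    tol: depth tolerance in metres, absorbing sensor noise.
    """
    valid = np.isfinite(depth) & (depth > 0)
    return valid & (depth < background - tol)

# Toy example: a 4x4 background at 3 m with an object at 1.5 m in one corner.
background = np.full((4, 4), 3.0)
frame = background.copy()
frame[:2, :2] = 1.5
print(depth_key(frame, background).astype(int))
```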