919 results for "largest finite-time Lyapunov exponent"


Relevance:

30.00%

Publisher:

Abstract:

Brazil is the largest sugarcane producer in the world and is well positioned to supply both national and international markets. To maintain this high level of production, it is fundamental to improve crop-season forecasting models through the use of alternative technologies, such as remote sensing. Thus, the main purpose of this article is to assess the results of two different statistical forecasting methods applied to an agroclimatic index (the water requirement satisfaction index; WRSI) and to the sugarcane spectral response (normalized difference vegetation index; NDVI) registered on National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR) satellite images. We also evaluated the cross-correlation between these two indexes. According to the results obtained, there are meaningful correlations between NDVI and WRSI at nonzero time lags. Additionally, the adjusted model for NDVI produced more accurate results than the forecasting models for WRSI. Finally, the analyses indicate that NDVI is more predictable due to its seasonality, whereas the WRSI values are more variable, making them harder to forecast.
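
Lagged cross-correlation of the kind evaluated here can be computed in a few lines. The sketch below is a minimal illustration under assumed inputs; the synthetic ndvi and wrsi series and the ±6-composite-period window are not taken from the paper.

```python
import numpy as np

def lagged_crosscorr(x, y, max_lag):
    """Pearson correlation between x[t] and y[t - k] for k = -max_lag..max_lag."""
    corrs = {}
    n = len(x)
    for k in range(-max_lag, max_lag + 1):
        if k >= 0:
            a, b = x[k:], y[:n - k]
        else:
            a, b = x[:n + k], y[-k:]
        corrs[k] = np.corrcoef(a, b)[0, 1]
    return corrs

# Hypothetical 10-day composite series over one growing season.
rng = np.random.default_rng(0)
t = np.arange(36)
wrsi = 0.6 + 0.3 * np.sin(2 * np.pi * t / 36) + 0.05 * rng.standard_normal(36)
ndvi = 0.5 + 0.3 * np.sin(2 * np.pi * (t - 3) / 36) + 0.02 * rng.standard_normal(36)

corrs = lagged_crosscorr(ndvi, wrsi, max_lag=6)
best = max(corrs, key=lambda k: abs(corrs[k]))
print(f"strongest correlation {corrs[best]:.2f} at lag {best}")
```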

Relevance:

30.00%

Publisher:

Abstract:

The numerical simulation of flows of highly elastic fluids has been the subject of intense research over the past decades, with important industrial applications. Therefore, many efforts have been made to improve the convergence capabilities of the numerical methods employed to simulate viscoelastic fluid flows. An important contribution to the solution of the High Weissenberg Number Problem was presented by Fattal and Kupferman [J. Non-Newton. Fluid Mech. 123 (2004) 281-285], who developed the matrix-logarithm-of-the-conformation-tensor technique, henceforth called the log-conformation tensor. Its advantage is a better approximation of the large growth of the stress tensor that occurs in some regions of the flow, and it is doubly beneficial in that it ensures physically correct stress fields, allowing converged computations in high Weissenberg number flows. In this work we investigate the application of the log-conformation tensor to three-dimensional unsteady free surface flows. The log-conformation tensor formulation was applied to solve the Upper-Convected Maxwell (UCM) constitutive equation, while the momentum equation was solved using a finite difference Marker-and-Cell type method. The resulting code is validated by comparing the log-conformation results with the analytic solution for fully developed pipe flows. To illustrate the stability of the log-conformation tensor approach in solving three-dimensional free surface flows, results from the simulation of the extrudate swell and jet buckling phenomena of UCM fluids at high Weissenberg numbers are presented. (C) 2012 Elsevier B.V. All rights reserved.
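
The key idea of the log-conformation approach is to evolve ψ = log(c) instead of the conformation tensor c itself, exploiting the fact that c is symmetric positive definite. A minimal sketch of that change of variables via eigendecomposition (illustrative only, not the solver described above):

```python
import numpy as np

def tensor_log(c):
    """Matrix logarithm of a symmetric positive-definite tensor.

    c = R diag(lam) R^T  =>  log(c) = R diag(log(lam)) R^T.
    """
    lam, R = np.linalg.eigh(c)
    return R @ np.diag(np.log(lam)) @ R.T

def tensor_exp(psi):
    """Inverse map: recover the conformation tensor from psi = log(c)."""
    lam, R = np.linalg.eigh(psi)
    return R @ np.diag(np.exp(lam)) @ R.T

# A conformation tensor with strongly disparate eigenvalues, as arises
# near stress concentrations at high Weissenberg numbers.
c = np.array([[50.0, 7.0, 0.0],
              [7.0,  2.0, 0.0],
              [0.0,  0.0, 1.0]])
psi = tensor_log(c)
assert np.allclose(tensor_exp(psi), c)  # round trip preserves the tensor
```

Since the exponential of any symmetric ψ is positive definite, discretization errors in ψ can never produce an unphysical conformation tensor, which is the root of the method's robustness.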

Relevance:

30.00%

Publisher:

Abstract:

Using the density matrix renormalization group, we calculated the finite-size corrections of the entanglement α-Rényi entropy of a single interval for several critical quantum chains. We considered models with U(1) symmetry, such as the spin-1/2 XXZ and spin-1 Fateev-Zamolodchikov models, as well as models with discrete symmetries, such as the Ising, Blume-Capel, and three-state Potts models. These corrections contain physically relevant information. Their amplitudes, which depend on the value of α, are related to the dimensions of operators in the conformal field theory governing the long-distance correlations of the critical quantum chains. The obtained results, together with earlier exact and numerical ones, allow us to formulate some general conjectures about the operator responsible for the leading finite-size correction of the α-Rényi entropies. We conjecture that the exponent of the leading finite-size correction of the α-Rényi entropies is p_α = 2X_ε/α for α > 1 and p_1 = ν, where X_ε denotes the dimension of the energy operator of the model and ν = 2 for all the models.
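
Stated in standard notation, and using the well-known Calabrese-Cardy form for the leading conformal term (a textbook result, not specific to this paper), the conjectured scaling for an interval of length ℓ in a periodic chain of L sites reads:

```latex
S_\alpha(\ell, L) \;=\; \frac{c}{6}\left(1 + \frac{1}{\alpha}\right)
  \ln\!\left[\frac{L}{\pi}\sin\frac{\pi \ell}{L}\right] + c'_\alpha
  \;+\; b_\alpha\, L^{-p_\alpha} + \dots,
\qquad
p_\alpha = \frac{2 X_\epsilon}{\alpha} \;\; (\alpha > 1), \quad p_1 = \nu = 2 .
```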

Relevance:

30.00%

Publisher:

Abstract:

An overview is given of the limitations of Luttinger liquid theory in describing the real-time equilibrium dynamics of critical one-dimensional systems with a nonlinear dispersion relation. After exposing the singularities of perturbation theory in the band curvature effects that break the Lorentz invariance of the Tomonaga-Luttinger model, the origin of high frequency oscillations in the long-time behaviour of correlation functions is discussed. The notion that correlations decay exponentially at finite temperature is challenged by the effects of diffusion in the density-density correlation due to umklapp scattering in lattice models.

Relevance:

30.00%

Publisher:

Abstract:

Electrothermomechanical MEMS are essentially microactuators that operate based on the thermoelastic effect induced by Joule heating of the structure. They can be easily fabricated and require relatively low excitation voltages. However, the actuation time of an electrothermomechanical microdevice is longer than the actuation times associated with the electrostatic and piezoelectric actuation principles. Thus, in this research, we propose an optimization framework, based on the topology optimization method applied to transient problems, to design electrothermomechanical microactuators for reduced response time. The objective is to maximize the integral over time of the output displacement of the actuator. The finite element equations that govern the time response of the actuators are provided. Furthermore, the Solid Isotropic Material with Penalization model and Sequential Linear Programming are employed. Finally, a smoothing filter is implemented to control the solution. Results for two distinct applications suggest that the proposed approach can provide actuators that are more than 50% faster. (C) 2012 Elsevier B.V. All rights reserved.
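
The Solid Isotropic Material with Penalization (SIMP) model mentioned above interpolates material properties between void and solid as a power law of the element design density. A minimal sketch of that interpolation; the exponent p = 3 and the property values are conventional illustrative choices, not taken from the paper:

```python
import numpy as np

def simp(rho, prop_solid, prop_void=1e-9, p=3.0):
    """SIMP interpolation: prop(rho) = prop_void + rho**p * (prop_solid - prop_void).

    rho in [0, 1] is the design density per finite element; a penalization
    exponent p > 1 makes intermediate densities uneconomical, pushing the
    optimizer toward crisp 0/1 (void/solid) layouts.
    """
    rho = np.clip(rho, 0.0, 1.0)
    return prop_void + rho**p * (prop_solid - prop_void)

# Interpolate Young's modulus and electrical conductivity for a design field
# (silicon-like values, assumed for illustration).
rho = np.array([0.0, 0.3, 0.7, 1.0])
E = simp(rho, prop_solid=169e9)    # Pa
sigma = simp(rho, prop_solid=1e5)  # S/m
print(E, sigma)
```

In an electrothermomechanical problem the same density field typically scales the electrical, thermal, and elastic properties together, so Joule heating, conduction, and thermoelastic expansion all "see" the same evolving layout.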

Relevance:

30.00%

Publisher:

Abstract:

We show, in the imaginary-time formalism, that the temperature-dependent parts of all the retarded (advanced) amplitudes vanish in the Schwinger model. We trace this behavior to the CPT invariance of the theory and give a physical interpretation of this result in terms of forward scattering amplitudes of on-shell thermal particles.

Relevance:

30.00%

Publisher:

Abstract:

The Gravity Recovery and Climate Experiment (GRACE) mission is dedicated to measuring temporal variations of the Earth's gravity field. In this study, the Stokes coefficients made available by the Groupe de Recherche en Géodésie Spatiale (GRGS) at a 10-day interval were converted into equivalent water height (EWH) for a ~4-year period in the Amazon basin (July 2002 to May 2006). The seasonal amplitudes of the EWH signal are the largest on the surface of the Earth, reaching ~1250 mm at the basin's center. The error budget amounts to ~130 mm of EWH, including formal errors on the Stokes coefficients, leakage errors (12-21 mm), and spectrum truncation (10-15 mm). A comparison between in situ river level time series measured at 233 ground-based hydrometric stations (HS) in the Amazon basin and vertically integrated EWH derived from GRACE is carried out in this paper. Although EWH and HS measure different water bodies, in most cases a high correlation (up to ~80%) is detected between the HS and EWH series at the same site. This correlation allows linear relationships to be adjusted between the in situ and GRACE-based series for the major tributaries of the Amazon River. The regression coefficients decrease from upstream to downstream along the rivers, reaching the theoretical value of 1 at the Amazon's mouth in the Atlantic Ocean. The variation of the regression coefficients with distance from the estuary is analysed for the largest rivers in the basin. In a second step, a classification of the proportionality between the in situ and GRACE time series is proposed.
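
Fitting the station-by-station linear relationship described above amounts to an ordinary least-squares regression of the GRACE EWH series on the river-level series. A minimal sketch; the series here are synthetic and the variable names are assumptions:

```python
import numpy as np

# Hypothetical co-located series: river stage (m) and GRACE EWH (mm),
# both sampled at the 10-day GRGS interval over ~4 years.
rng = np.random.default_rng(1)
t = np.arange(140)
stage = 4.0 * np.sin(2 * np.pi * t / 36.5) + rng.normal(0, 0.3, t.size)
ewh = 250.0 * stage + rng.normal(0, 80.0, t.size)

# Correlation, then least-squares regression EWH = a * stage + b.
r = np.corrcoef(stage, ewh)[0, 1]
a, b = np.polyfit(stage, ewh, 1)
print(f"correlation r = {r:.2f}, slope a = {a:.1f} mm of EWH per m of stage")
```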

Relevance:

30.00%

Publisher:

Abstract:

The ubiquity of time series data across almost all human endeavors has produced great interest in time series data mining in the last decade. While dozens of classification algorithms have been applied to time series, recent empirical evidence strongly suggests that simple nearest neighbor classification is exceptionally difficult to beat. The choice of distance measure used by the nearest neighbor algorithm is important, and depends on the invariances required by the domain. For example, motion capture data typically requires invariance to warping, and cardiology data requires invariance to the baseline (the mean value). Similarly, recent work suggests that for time series clustering, the choice of clustering algorithm is much less important than the choice of distance measure used. In this work we make a somewhat surprising claim: there is an invariance that the community seems to have missed, complexity invariance. Intuitively, the problem is that in many domains the different classes may have different complexities, and pairs of complex objects, even those which subjectively may seem very similar to the human eye, tend to be further apart under current distance measures than pairs of simple objects. This fact introduces errors in nearest neighbor classification, where some complex objects may be incorrectly assigned to a simpler class. Similarly, for clustering this effect can introduce errors by "suggesting" to the clustering algorithm that subjectively similar but complex objects belong in a sparser and larger-diameter cluster than is truly warranted. We introduce the first complexity-invariant distance measure for time series, and show that it generally produces significant improvements in classification and clustering accuracy. We further show that this improvement does not compromise efficiency, since we can lower bound the measure and use a modification of the triangular inequality, thus making use of most existing indexing and data mining algorithms. We evaluate our ideas with the largest and most comprehensive set of time series mining experiments ever attempted in a single work, and show that complexity-invariant distance measures can produce improvements in classification and clustering in the vast majority of cases.
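
The complexity-invariant distance (CID) corrects the Euclidean distance by the ratio of the two series' complexity estimates, where complexity is estimated from the squared successive differences of each series. A minimal sketch following the published definition of this measure:

```python
import numpy as np

def complexity_estimate(x):
    """CE(x): sqrt of the sum of squared successive differences.

    Smooth, nearly straight series score low; jagged series score high.
    """
    return np.sqrt(np.sum(np.diff(x) ** 2))

def cid(x, y):
    """Complexity-invariant distance: Euclidean distance scaled by the
    ratio of complexity estimates (always >= the plain Euclidean distance)."""
    ed = np.linalg.norm(x - y)
    ce_x, ce_y = complexity_estimate(x), complexity_estimate(y)
    return ed * max(ce_x, ce_y) / max(min(ce_x, ce_y), 1e-12)

# A smooth and a jagged series at the same amplitude: CID pushes the
# simple/complex pair further apart than plain Euclidean distance does.
t = np.linspace(0, 1, 100)
smooth = np.sin(2 * np.pi * t)
jagged = np.sin(2 * np.pi * t) + 0.3 * np.sin(40 * np.pi * t)
print(np.linalg.norm(smooth - jagged), cid(smooth, jagged))
```

Because CID only rescales the Euclidean distance by a factor of at least 1, the lower-bounding and indexing machinery mentioned in the abstract carries over naturally.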

Relevance:

30.00%

Publisher:

Abstract:

This thesis investigates combinatorial and robust optimisation models for solving railway problems. Railway applications represent a challenging area for operations research. In fact, most problems in this context can be modelled as combinatorial optimisation problems, in which the number of feasible solutions is finite. Yet, despite the astonishing success in the field of combinatorial optimisation, the current state of algorithmic research faces severe difficulties with highly complex and data-intensive applications such as those dealing with optimisation issues in large-scale transportation networks. One of the main issues concerns imperfect information. The idea of robust optimisation, as a way to represent and handle mathematically systems whose data are not precisely known, dates back to the 1970s. Unfortunately, none of those techniques proved to be successfully applicable in one of the most complex and largest-scale transportation settings: that of railway systems. Railway optimisation deals with planning and scheduling problems over several time horizons. Disturbances are inevitable and severely affect the planning process. Here we focus on two compelling aspects of planning: robust planning and online (real-time) planning.

Relevance:

30.00%

Publisher:

Abstract:

In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effects model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators, and implement them through the EM algorithm. We test the finite-sample properties of the estimators and the validity of our purpose-written R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
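
As a sketch of the structure described above (an illustration, not the thesis's exact specification), a two-level model for the log price of item i sold at auction date j, with a time-dependent random effect at the date level, could be written as:

```latex
\log p_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol\beta + u_j + \varepsilon_{ij},
\qquad
\varepsilon_{ij} \sim \mathcal{N}(0,\sigma^2),
\qquad
u_j = \rho\, u_{j-1} + \eta_j, \quad \eta_j \sim \mathcal{N}(0,\tau^2),
```

where x_ij collects the hedonic covariates of item i. The AR(1) form for u_j is one simple way to make the date-level effects time dependent rather than i.i.d.; it is an assumption made here purely for concreteness.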

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a strategy to model the behavior of fluids and their interaction with deformable bodies is proposed. The fluid domain is modeled using the lattice Boltzmann method, thus analyzing the fluid dynamics from a mesoscopic point of view. It has been proved that the solution provided by this method is equivalent to solving the Navier-Stokes equations for an incompressible flow with second-order accuracy. Slender elastic structures idealized through beam finite elements are used. Large displacements are accounted for by using the corotational formulation. The structural dynamics is computed using the Time Discontinuous Galerkin method. Therefore, two different solution procedures are used, one for the fluid domain and one for the structural part. These two solvers need to communicate and to exchange several kinds of information, i.e., stresses, velocities, and displacements. In order to guarantee a continuous, effective, and mutual exchange of information, a coupling strategy, consisting of three different algorithms, has been developed and numerically tested. In particular, the effectiveness of the three algorithms is shown in terms of the interface energy artificially produced by the approximate fulfilment of the compatibility and equilibrium conditions at the fluid-structure interface. The proposed coupled approach is used to solve different fluid-structure interaction problems, i.e., cantilever beams immersed in a viscous fluid, the impact of a ship hull on the marine free surface, blood flow in deformable vessels, and even flapping wings simulating the take-off of a butterfly. The good results achieved in each application highlight the effectiveness of the proposed methodology and of the C++ software developed to successfully approach several two-dimensional fluid-structure interaction problems.
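
The lattice Boltzmann method evolves particle distribution functions f_i on a lattice through a collision and a streaming step; in the common single-relaxation-time (BGK) form, collision relaxes f_i toward a local equilibrium. A minimal D2Q9 sketch of that equilibrium and collision step, using the standard textbook formulas rather than the thesis code (in lattice units the kinematic viscosity is ν = c_s²(τ − 1/2) with c_s² = 1/3, which is how τ is chosen in practice):

```python
import numpy as np

# D2Q9 lattice: discrete velocities c_i and weights w_i (standard values).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order truncated Maxwell-Boltzmann equilibrium, one value
    per lattice direction."""
    cu = c @ u                      # c_i . u for each direction
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

def bgk_collide(f, tau):
    """Relax the 9 distributions toward local equilibrium (BGK collision)."""
    rho = f.sum()                               # local density
    u = (f[:, None] * c).sum(axis=0) / rho      # local velocity
    return f - (f - equilibrium(rho, u)) / tau

f = equilibrium(rho=1.0, u=np.array([0.05, 0.0]))  # start at equilibrium
f = bgk_collide(f + 1e-3, tau=0.8)                  # perturb, collide once
```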

Relevance:

30.00%

Publisher:

Abstract:

The shallow water equations (SWE) are a hyperbolic system of balance laws that provide adequate approximations to large-scale flows of the oceans, rivers, and the atmosphere; mass and momentum are conserved. We distinguish two characteristic speeds: the advection speed, i.e. the speed of mass transport, and the gravity wave speed, i.e. the speed of the surface waves that carry energy and momentum. The Froude number is a dimensionless characteristic number given by the ratio of the reference advection speed to the reference gravity wave speed. For the applications mentioned above it is typically very small, e.g. 0.01. Time-explicit finite volume schemes are the methods most often used for the numerical computation of hyperbolic balance laws. They must therefore satisfy the CFL stability condition, and the time increment is roughly proportional to the Froude number. Hence, for small Froude numbers, say below 0.2, the computational cost becomes high. Furthermore, the numerical solutions are dissipative. It is well known that the solutions of the SWE converge to the solutions of the lake equations (the zero-Froude-number SWE) as the Froude number tends to zero, provided adequate conditions are fulfilled. In this limit process the equations change type from hyperbolic to hyperbolic-elliptic. Moreover, for small Froude numbers the order of convergence may drop or the numerical scheme may break down. In particular, incorrect asymptotic behaviour (with respect to the Froude number) has been observed for time-explicit schemes, which may cause these effects. Oceanographic and atmospheric flows are typically small perturbations of an underlying equilibrium state. We want numerical schemes for balance laws to preserve certain equilibrium states exactly, since otherwise the scheme may generate spurious flows; the approximation of the source term is therefore essential. Numerical schemes that preserve equilibrium states are called well-balanced.

In the present work we split the SWE into a stiff, linear part and a non-stiff part in order to circumvent the severe time step restriction imposed by the CFL condition. The stiff part is approximated implicitly and the non-stiff part explicitly, using IMEX (implicit-explicit) Runge-Kutta and IMEX multistep time discretizations. The spatial discretization is performed with the finite volume method. The stiff part is approximated either by finite differences or in a genuinely multidimensional manner; for the multidimensional approximation we use approximate evolution operators that take all of the infinitely many directions of information propagation into account. The explicit terms are approximated with standard numerical fluxes. We thus obtain a stability condition analogous to that of a purely advective flow, i.e. the admissible time increment grows by a factor equal to the reciprocal of the Froude number. The schemes derived in this work are asymptotic preserving and well-balanced. The asymptotic-preserving property ensures that the numerical solution exhibits the correct asymptotic behaviour with respect to small Froude numbers. We present first- and second-order schemes. Numerical results confirm the order of convergence as well as the stability, well-balancedness, and asymptotic preservation. In particular, we observe for some schemes that the order of convergence is almost independent of the Froude number.
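
For reference, the one-dimensional shallow water equations over bottom topography b(x), together with the Froude number, take the standard form below (the nondimensionalization via reference scales u_ref and h_ref is one common convention, not necessarily the one used in the thesis):

```latex
\partial_t h + \partial_x (hu) = 0, \qquad
\partial_t (hu) + \partial_x\!\left( h u^2 + \tfrac{1}{2} g h^2 \right) = -g h \,\partial_x b,
\qquad
\mathrm{Fr} = \frac{u_{\mathrm{ref}}}{\sqrt{g\, h_{\mathrm{ref}}}} .
```

The lake (zero-Froude) limit referred to above corresponds to Fr → 0, in which the gravity wave speed √(gh) formally becomes infinitely fast relative to advection.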

Relevance:

30.00%

Publisher:

Abstract:

Groundwater represents one of the most important resources in the world, and it is essential to prevent its pollution and to consider remediation interventions in case of contamination. According to the scientific community, the characterization and management of contaminated sites have to be performed in terms of contaminant fluxes, considering their spatial and temporal evolution. One of the most suitable approaches to determine the spatial distribution of pollutants and to quantify contaminant fluxes in groundwater is the use of control panels. The determination of contaminant mass flux requires measurement of the contaminant concentration in the moving phase (water) and of the velocity/flux of the groundwater. In this Master's thesis, a new solute mass flux measurement approach is proposed, based on an integrated control panel type methodology combined with the Finite Volume Point Dilution Method (FVPDM) for the monitoring of transient groundwater fluxes. Moreover, a new adsorption passive sampler, which allows the variation of solute concentration with time to be captured, is designed. The present work contributes to the development of this approach on three key points. First, the ability of the FVPDM to monitor transient groundwater fluxes was verified during a step-drawdown test at the experimental site of Hermalle-sous-Argenteau (Belgium). The results showed that this method can be used, with optimal results, to follow transient groundwater fluxes. Moreover, performing the FVPDM in several piezometers during a pumping test makes it possible to determine the different flow rates and flow regimes that can occur in the various parts of an aquifer. The second field test, aimed at determining how representative a control panel is for measuring mass fluxes in groundwater, underlined that wrong evaluations of Darcy fluxes and discharge surfaces can lead to an incorrect estimation of mass fluxes, and that this technique therefore has to be used with precaution: a detailed geological and hydrogeological characterization must be conducted before applying it. Finally, the third outcome of this work concerns laboratory experiments. The tests conducted on several types of adsorption material (Oasis HLB cartridges, TDS-ORGANOSORB 10, and TDS-ORGANOSORB 10-AA), in order to determine the optimum medium for dimensioning the passive sampler, highlighted the necessity of finding a material with a reversible adsorption tendency to fully satisfy the requirements of the new passive sampling technique.
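
The quantity at the heart of the control panel approach is the contaminant mass discharge across a transect: the sum over panel cells of concentration times Darcy flux times cell area. A minimal sketch of that bookkeeping; the cell values below are hypothetical, whereas the real method derives the fluxes from FVPDM monitoring:

```python
import numpy as np

# Hypothetical control panel discretized into 5 cells across the plume:
# concentration C (g/m^3), Darcy flux q (m/day), cell area A (m^2).
C = np.array([0.0, 1.2, 3.5, 2.1, 0.4])      # g/m^3
q = np.array([0.05, 0.08, 0.10, 0.07, 0.05])  # m/day
A = np.full(5, 2.0)                           # m^2 per cell

# Mass discharge M = sum_i C_i * q_i * A_i across the panel (g/day).
M = np.sum(C * q * A)
print(f"contaminant mass discharge: {M:.2f} g/day")
```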

Relevance:

30.00%

Publisher:

Abstract:

Electric power grids throughout the world suffer from serious inefficiencies associated with under-utilization, due to the demand patterns, engineering design, and load-following approaches in use today. These grids consume much of the world's energy and represent a large carbon footprint. From a material utilization perspective, significant hardware is manufactured and installed for this infrastructure, often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity when renewable technologies (such as solar and wind) and electric vehicles are added to the utility demand/supply mix. Using actual data from the PJM [PJM 2009], this work shows that consumer load management, real-time price signals, sensors, and intelligent demand/supply control offer a compelling path toward increasing the efficient utilization of the world's grids and reducing their carbon footprint. Underutilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for just a few hours per year, and on average are loaded to less than 30-40% of their capability. By creating strong societal connections between consumers and energy providers, technology can radically change this situation. Through intelligent deployment of smart sensors, smart electric vehicles, and consumer-based load management technology, very high saturations of intermittent renewable energy supplies can be effectively controlled and dispatched to increase the utilization of existing utility distribution, substation, transmission, and generation equipment. Strengthening these technology, society, and consumer relationships requires rapid dissemination of knowledge (real-time prices, costs and benefit sharing, demand response requirements) in order to incentivize behaviors that increase the effective use of technological equipment that represents one of the largest capital assets modern society has created.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To compare clinical outcomes after laparoscopic cholecystectomy (LC) for acute cholecystitis performed at various time-points after hospital admission. Background: Symptomatic gallstones represent an important public health problem, with LC the treatment of choice. LC is increasingly offered for acute cholecystitis; however, the optimal time-point for LC in this setting remains a matter of debate. Methods: The analysis was based on the prospective database of the Swiss Association of Laparoscopic and Thoracoscopic Surgery and included patients undergoing emergency LC for acute cholecystitis between 1995 and 2006, grouped according to the time-point of LC after hospital admission (admission day (d0), d1, d2, d3, d4/5, d ≥6). Linear and generalized linear regression models assessed the effect of the timing of LC on intra- or postoperative complications, conversion and reoperation rates, and length of postoperative hospital stay. Results: Of 4113 patients, 52.8% were female and the median age was 59.8 years. Delaying LC resulted in significantly higher conversion rates (from 11.9% at d0 to 27.9% at d ≥6 after admission, P < 0.001), surgical postoperative complications (5.7% to 13%, P < 0.001), and reoperation rates (0.9% to 3%, P = 0.007), with a significantly longer postoperative hospital stay (P < 0.001). Conclusions: Delaying LC for acute cholecystitis has no advantages, resulting in significantly increased conversion and reoperation rates, more postoperative complications, and a longer postoperative hospital stay. This investigation, one of the largest in the literature, provides compelling evidence that acute cholecystitis merits surgery within 48 hours of hospital admission if the impact on the patient and the health care system is to be minimized.