877 results for Non-stationary iterative method
Abstract:
We examine differential equations where nonlinearity is a result of the advection part of the total derivative or the use of quadratic algebraic constraints between state variables (such as the ideal gas law). We show that these types of nonlinearity can be accounted for in the tangent linear model by a suitable choice of the linearization trajectory. Using this optimal linearization trajectory, we show that the tangent linear model can be used to reproduce the exact nonlinear error growth of perturbations for more than 200 days in a quasi-geostrophic model and more than (the equivalent of) 150 days in the Lorenz 96 model. We introduce an iterative method, purely based on tangent linear integrations, that converges to this optimal linearization trajectory. The main conclusion from this article is that this iterative method can be used to account for nonlinearity in estimation problems without using the nonlinear model. We demonstrate this by performing forecast sensitivity experiments in the Lorenz 96 model and show that we are able to estimate analysis increments that improve the two-day forecast using only four backward integrations with the tangent linear model. Copyright © 2011 Royal Meteorological Society
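The abstract describes the iteration only at a high level. The following minimal Lorenz 96 sketch illustrates the general idea of refining a linearization trajectory using tangent linear integrations only; the midpoint-style update, step size, and function names are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

F = 8.0     # Lorenz 96 forcing
N = 40      # number of state variables
dt = 0.05   # integration step

def l96_tendency(x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def l96_tlm_tendency(dx, x):
    # Tangent linear model of Lorenz 96, linearized about the state x
    return ((np.roll(dx, -1) - np.roll(dx, 2)) * np.roll(x, 1)
            + (np.roll(x, -1) - np.roll(x, 2)) * np.roll(dx, 1) - dx)

def rk4(f, y, *args):
    k1 = f(y, *args)
    k2 = f(y + 0.5 * dt * k1, *args)
    k3 = f(y + 0.5 * dt * k2, *args)
    k4 = f(y + dt * k3, *args)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def nonlinear_trajectory(x0, n_steps):
    traj = [x0]
    for _ in range(n_steps):
        traj.append(rk4(l96_tendency, traj[-1]))
    return np.array(traj)

def tlm_perturbation(dx0, lin_traj):
    # Propagate a perturbation with the TLM along a given linearization
    # trajectory (linearization state frozen within each step for simplicity).
    dx, pert = dx0.copy(), [dx0]
    for x in lin_traj[:-1]:
        dx = rk4(l96_tlm_tendency, dx, x)
        pert.append(dx)
    return np.array(pert)

rng = np.random.default_rng(0)
x_bg0 = rng.standard_normal(N)
x_bg = nonlinear_trajectory(x_bg0, 200)     # background (reference) trajectory
dx0 = 0.01 * rng.standard_normal(N)         # initial perturbation

# Hypothetical fixed-point iteration, using TLM integrations only:
# repeatedly re-linearize about "background + half the TLM-evolved perturbation".
lin_traj = x_bg.copy()
for _ in range(4):
    dpert = tlm_perturbation(dx0, lin_traj)
    lin_traj = x_bg + 0.5 * dpert
dpert = tlm_perturbation(dx0, lin_traj)

# Check against the fully nonlinear perturbed run (used here for verification only).
x_true = nonlinear_trajectory(x_bg0 + dx0, 200)
print("final-time error of TLM-based growth:",
      np.linalg.norm((x_bg + dpert)[-1] - x_true[-1]))
```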
Abstract:
We investigate for 26 OECD economies whether their current account imbalances to GDP are driven by stochastic trends. Regarding bounded stationarity as the more natural counterpart of sustainability, results from Phillips–Perron tests for unit root and bounded unit root processes are contrasted. While the former hint at stationarity of current account imbalances for 12 economies, the latter indicate bounded stationarity for only six economies. Through panel-based test statistics, current account imbalances are diagnosed as bounded non-stationary. Thus, (spurious) rejections of the unit root hypothesis might be due to the existence of bounds reflecting hidden policy controls or financial crises.
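As a brief, hedged illustration of the standard (unbounded) Phillips–Perron test that forms one side of the comparison, the sketch below applies the arch package to a placeholder current-account-to-GDP series; the bounded unit-root counterpart discussed in the abstract requires specialised critical values for bounded processes and is not part of that library.

```python
import numpy as np
import pandas as pd
from arch.unitroot import PhillipsPerron

# Placeholder quarterly current-account-to-GDP series for one economy;
# in practice this would be replaced by the observed OECD data.
rng = np.random.default_rng(42)
ca_gdp = pd.Series(0.1 * np.cumsum(rng.standard_normal(160)),
                   name="current_account_to_gdp")

# Standard Phillips-Perron test: H0 = unit root (non-stationary series).
pp = PhillipsPerron(ca_gdp, trend="c")   # constant, no deterministic trend
print(pp.summary())
print("PP statistic:", pp.stat, " p-value:", pp.pvalue)

# The bounded unit-root tests contrasted in the abstract additionally allow
# for series constrained to a band; they need specialised critical values
# and are not implemented here.
```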
Abstract:
This paper demonstrates the impracticality of a comprehensive mathematical definition of the term `drought' which formalises the general qualitative definition that drought is `a deficit of water relative to normal conditions'. Starting from the local water balance, it is shown that a universal description of drought requires reference to water supply, demand and management. The influence of human intervention through water management is shown to be intrinsic to the definition of drought in the universal sense and can only be eliminated in the case of purely meteorological drought. The state of `drought' is shown to be predicated on the existence of climatological norms for a multitude of process specific terms. In general these norms are either difficult to obtain or even non-existent in the non-stationary context of climate change. Such climatological considerations, in conjunction with the difficulty of quantifying human influence, lead to the conclusion that we cannot reasonably expect the existence of any workable generalised objective definition of drought.
Abstract:
Although financial theory rests heavily upon the assumption that asset returns are normally distributed, value indices of commercial real estate display significant departures from normality. In this paper, we apply and compare the properties of two recently proposed regime switching models for value indices of commercial real estate in the US and the UK, both of which relax the assumption that observations are drawn from a single distribution with constant mean and variance. Statistical tests of the models' specification indicate that the Markov switching model is better able to capture the non-stationary features of the data than the threshold autoregressive model, although both represent superior descriptions of the data than the models that allow for only one state. Our results have several implications for theoretical models and empirical research in finance.
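A minimal sketch of the regime-switching idea, assuming a two-regime Markov switching model with switching mean and variance fitted via statsmodels; the return series is a placeholder and the specification is not necessarily the one used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Placeholder monthly returns of a commercial real estate value index.
rng = np.random.default_rng(1)
calm = rng.normal(0.005, 0.01, size=180)       # low-volatility regime
stressed = rng.normal(-0.010, 0.04, size=60)   # high-volatility regime
returns = pd.Series(np.concatenate([calm, stressed, calm[:60]]),
                    name="re_index_return")

# Two-regime Markov switching model with regime-dependent mean and variance.
model = sm.tsa.MarkovRegression(returns, k_regimes=2,
                                trend="c", switching_variance=True)
res = model.fit()
print(res.summary())

# Smoothed probability of being in the second regime at each date.
print(res.smoothed_marginal_probabilities[1].tail())
```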
Abstract:
Identifying predictability, and its sources, for the western North Pacific (WNP) summer climate in the presence of non-stationary teleconnections during recent decades benefits further improvements in long-range prediction of the WNP and East Asian summers. In the past few decades, pronounced increases in summer sea surface temperature (SST) and the associated interannual variability have been observed over the tropical Indian Ocean and eastern Pacific around the late 1970s, and over the Maritime Continent and western–central Pacific around the early 1990s. These increases are associated with significant enhancements of the interannual variability of the lower-tropospheric wind over the WNP. In this study, we further assess interdecadal changes in the seasonal prediction of WNP summer anomalies, using May-start retrospective forecasts from the ENSEMBLES multi-model project for the period 1960–2005. Prediction of the WNP summer anomalies exhibits an interdecadal shift, with higher skill since the late 1970s and particularly after the early 1990s. Improvements in SST prediction skill after the late 1970s are found mainly around the tropical Indian Ocean and the WNP. The better prediction of the WNP after the late 1970s may arise mainly from improved SST prediction around the tropical eastern Indian Ocean; the close teleconnection between the tropical eastern Indian Ocean and WNP summer variability operates in both the model predictions and the observations. After the early 1990s, on the other hand, the improvements are detected mainly around the South China Sea and the Philippines for the lower-tropospheric zonal wind and precipitation anomalies, associated with a better description of the SST anomalies around the Maritime Continent. A dipole SST pattern spanning the Maritime Continent and the central equatorial Pacific Ocean is closely related to the WNP summer anomalies after the early 1990s. This teleconnection mode is highly predictable and is realistically reproduced by the models, providing more predictable signals for the WNP summer climate after the early 1990s.
Abstract:
It has been postulated that autism spectrum disorder is underpinned by an ‘atypical connectivity’ involving higher-order association brain regions. To test this hypothesis in a large cohort of adults with autism spectrum disorder we compared the white matter networks of 61 adult males with autism spectrum disorder and 61 neurotypical controls, using two complementary approaches to diffusion tensor magnetic resonance imaging. First, we applied tract-based spatial statistics, a ‘whole brain’ non-hypothesis driven method, to identify differences in white matter networks in adults with autism spectrum disorder. Following this we used a tract-specific analysis, based on tractography, to carry out a more detailed analysis of individual tracts identified by tract-based spatial statistics. Finally, within the autism spectrum disorder group, we studied the relationship between diffusion measures and autistic symptom severity. Tract-based spatial statistics revealed that autism spectrum disorder was associated with significantly reduced fractional anisotropy in regions that included frontal lobe pathways. Tractography analysis of these specific pathways showed increased mean and perpendicular diffusivity, and reduced number of streamlines in the anterior and long segments of the arcuate fasciculus, cingulum and uncinate—predominantly in the left hemisphere. Abnormalities were also evident in the anterior portions of the corpus callosum connecting left and right frontal lobes. The degree of microstructural alteration of the arcuate and uncinate fasciculi was associated with severity of symptoms in language and social reciprocity in childhood. Our results indicated that autism spectrum disorder is a developmental condition associated with abnormal connectivity of the frontal lobes. Furthermore our findings showed that male adults with autism spectrum disorder have regional differences in brain anatomy, which correlate with specific aspects of autistic symptoms. Overall these results suggest that autism spectrum disorder is a condition linked to aberrant developmental trajectories of the frontal networks that persist in adult life.
Abstract:
Lagged correlation analysis is often used to infer intraseasonal dynamical effects but is known to be affected by non-stationarity. We highlight a pronounced quasi-two-year peak in the anomalous zonal wind and eddy momentum flux convergence power spectra in the Southern Hemisphere, which is prima facie evidence for non-stationarity. We then investigate the consequences of this non-stationarity for the Southern Annular Mode and for eddy momentum flux convergence. We argue that positive lagged correlations previously attributed to the existence of an eddy feedback are more plausibly attributed to non-stationary interannual variability external to any potential feedback process in the mid-latitude troposphere. The findings have implications for the diagnosis of feedbacks in both models and re-analysis data as well as for understanding the mechanisms underlying variations in the zonal wind.
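To make the diagnostic concrete, the sketch below computes lagged correlations between two placeholder series that share only a slow, interannual-like component; under these assumed inputs it illustrates how non-stationary low-frequency variability can produce positive lagged correlations without any eddy feedback. It is not the authors' analysis or data.

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Correlation of x(t) with y(t + lag) for lag = -max_lag .. +max_lag."""
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag >= 0:
            a, b = x[: len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[: len(y) + lag]
        corrs.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(corrs)

# Placeholder daily anomalies; in practice these would be a zonal-mean zonal
# wind (SAM-like) index and the eddy momentum flux convergence.
rng = np.random.default_rng(0)
n = 4000
slow = 0.01 * np.cumsum(rng.standard_normal(n))   # shared low-frequency component
u_anom = slow + rng.standard_normal(n)
emfc = 0.3 * slow + rng.standard_normal(n)

lags, corrs = lagged_correlation(u_anom, emfc, max_lag=30)
print(dict(zip(lags[::10], np.round(corrs[::10], 3))))
# Positive correlations at positive lags can arise purely from the shared
# slow component, mimicking an eddy feedback that is not actually present.
```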
Abstract:
Given the high susceptibility of baby spinach leaves to thermal processing, the use of high hydrostatic pressure (HHP) is explored as a non-thermal blanching method. The effects of HHP were compared with thermal blanching by following the residual activity of polyphenol oxidases and peroxidases, colour retention, chlorophyll and carotenoid content, antioxidant capacity and total polyphenol content. Spinach subjected to 700 MPa at 20 °C for 15 min represented the best treatment among the conditions studied because of its balanced effect on the target enzymes and quality indices. This treatment reduced the enzyme activities of polyphenol oxidases and peroxidases by 86.4 and 76.7 %, respectively. Furthermore, the leaves did not present changes in colour, and increases of 13.6 % and 15.6 % were found in chlorophyll and carotenoid content, respectively; regarding phytochemical compounds, retentions of 28.2 % of antioxidant capacity and 77.1 % of polyphenol content were found. The results demonstrated that HHP (700 MPa) at room temperature, compared with the thermal treatments, gave better retention of polyphenols, chlorophyll and carotenoid contents that were not significantly different, and no perceptible differences in instrumental colour as evaluated through the ΔE value; it can therefore be considered a realistic practical alternative to the widely used thermal blanching.
Abstract:
Renoguanylin (REN) is a recently described member of the guanylin family, first isolated from eels and expressed in intestinal and especially kidney tissues. In the present work we evaluated the effects of REN on the mechanisms of hydrogen transport in rat renal tubules by the stationary microperfusion method. We evaluated the effect of 1 µM and 10 µM REN on the reabsorption of bicarbonate in proximal and distal segments and found a significant reduction in bicarbonate reabsorption. In proximal segments, REN had a significant effect at both 1 and 10 µM; comparing control and 1 µM REN, J_HCO3⁻ (nmol cm⁻² s⁻¹) of 1.76 ± 0.11 (control) vs. 1.29 ± 0.08 (REN) was obtained (P < 0.05). In distal segments both concentrations of REN were also effective, the effect being significant, e.g., at 1 µM (J_HCO3⁻, nmol cm⁻² s⁻¹: 0.80 ± 0.07 (control) vs. 0.60 ± 0.06 (REN); P < 0.05), although at a lower level than in the proximal tubule. Our results suggest that the action of REN on hydrogen transport involves inhibition of the Na⁺/H⁺ exchanger and H⁺-ATPase in the luminal membrane of the perfused tubules via a PKG-dependent pathway. (c) 2009 Elsevier B.V. All rights reserved.
Abstract:
Conventional procedures employed in the modeling of viscoelastic properties of polymers rely on the determination of the polymer's discrete relaxation spectrum from experimentally obtained data. In the past decades, several analytical regression techniques have been proposed to determine an explicit equation which describes the measured spectra. Taking a different approach, the procedure introduced herein constitutes a simulation-based computational optimization technique built on a non-deterministic search method arising from the field of evolutionary computation. Rather than simply comparing numerical results, the purpose of this paper is to highlight some subtle differences between the two strategies and to focus on the properties of the exploited technique that open new possibilities for the field. To illustrate this, the cases examined show how the employed technique can outperform conventional approaches in terms of fitting quality. Moreover, in some instances, it produces equivalent results with much fewer fitting parameters, which is convenient for computational simulation applications. The problem formulation and the rationale of the highlighted method are discussed herein and constitute the main intended contribution. (C) 2009 Wiley Periodicals, Inc. J Appl Polym Sci 113: 122-135, 2009
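The abstract does not spell out the algorithm. As a hedged sketch of the general strategy (fitting a discrete relaxation spectrum with a non-deterministic evolutionary search rather than analytical regression), the example below fits a two-mode Prony series to synthetic relaxation data with scipy's differential evolution; the Prony form, bounds and data are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Relaxation modulus as a discrete spectrum (Prony series):
# G(t) = sum_k g_k * exp(-t / tau_k)
def prony(t, params):
    g = params[0::2]     # moduli g_k
    tau = params[1::2]   # relaxation times tau_k
    return sum(gk * np.exp(-t / tk) for gk, tk in zip(g, tau))

# Synthetic "measured" data from a known two-mode spectrum plus noise.
rng = np.random.default_rng(3)
t = np.logspace(-2, 2, 60)
params_true = np.array([1.0e5, 0.2, 2.0e4, 5.0])   # [g1, tau1, g2, tau2]
data = prony(t, params_true) * (1 + 0.02 * rng.standard_normal(t.size))

def objective(params):
    # Relative least-squares misfit between model and data.
    return np.sum(((prony(t, params) - data) / data) ** 2)

# Evolutionary (non-deterministic) search over moduli and relaxation times.
bounds = [(1e3, 1e6), (1e-3, 1e1), (1e3, 1e6), (1e-1, 1e3)]
result = differential_evolution(objective, bounds, seed=0, tol=1e-8)
print("fitted [g1, tau1, g2, tau2]:", result.x)
print("misfit:", result.fun)
```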
Surfactant-nanotube interactions in water and nanotube separation by diameter: atomistic simulations
Abstract:
A non-destructive sorting method to separate single-walled carbon nanotubes (SWNTs) by diameter was recently proposed. By this method, SWNTs are suspended in water by surfactant encapsulation and the separation is carried out by ultracentrifugation in a density gradient. SWNTs of different diameters are distributed according to their densities along the centrifuge tube. A mixture of two anionic surfactants, namely sodium dodecylsulfate (SDS) and sodium cholate (SC), presented the best performance in discriminating nanotubes by diameter. Unexpectedly, small diameter nanotubes are found at the low density part of the centrifuge tube. We present molecular dynamics studies of the water-surfactant-SWNT system to investigate the role of surfactants in the sorting process. We found that surfactants can actually be attracted towards the interior of the nanotube cage, depending on the relationship between the surfactant radius of gyration and the nanotube diameter. The dynamics at room temperature showed that, as the amphiphile moves to the hollow cage, water molecules are dragged together, thereby promoting the nanotube filling. The resulting densities of filled SWNT are in agreement with measured densities.
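The abstract's criterion, the relationship between the surfactant radius of gyration and the nanotube diameter, can be illustrated with a short geometric check. The coordinates, masses, nanotube diameter and the roughly 0.34 nm wall offset below are placeholder values for illustration, not results from the reported simulations, which would instead analyse trajectory coordinates from the molecular dynamics runs.

```python
import numpy as np

def radius_of_gyration(coords, masses):
    """Mass-weighted radius of gyration of a molecule (coords in nm)."""
    com = np.average(coords, axis=0, weights=masses)
    sq_dist = np.sum((coords - com) ** 2, axis=1)
    return np.sqrt(np.average(sq_dist, weights=masses))

# Hypothetical coarse surfactant geometry: a flexible chain of heavy atoms.
rng = np.random.default_rng(5)
chain = np.cumsum(rng.normal(0.0, 0.15, size=(20, 3)), axis=0)
masses = np.full(20, 14.0)            # roughly CH2-like united atoms

rg = radius_of_gyration(chain, masses)

# Compare with the internal radius available inside an SWNT of diameter d;
# ~0.34 nm is a typical graphitic van der Waals offset from the wall.
d_tube = 1.3                          # nm, example nanotube diameter
r_available = d_tube / 2 - 0.34

print(f"surfactant Rg = {rg:.2f} nm, available internal radius = {r_available:.2f} nm")
print("interior encapsulation plausible" if rg < r_available
      else "surfactant unlikely to enter the tube")
```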
Abstract:
When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and thus can be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup in comparison to traditional "flat" dynamic programming approaches and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated. (C) 2011 Elsevier B.V. All rights reserved.
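As a hedged sketch of the underlying robust dynamic programming idea, the example below runs maximin value iteration on a tiny flat (non-factored) MDP with interval transition probabilities; the worst-case expectation over each interval set is computed with a simple greedy allocation instead of an external nonlinear solver. The toy problem and bounds are made up, and this is not the paper's factored algorithm.

```python
import numpy as np

def worst_case_expectation(values, p_low, p_high):
    """Minimise sum_i p_i * values[i] over p with p_low <= p <= p_high, sum p = 1.
    Greedy allocation: start at the lower bounds, then push the remaining
    probability mass onto the lowest-value successor states first."""
    p = p_low.copy()
    slack = 1.0 - p.sum()
    for i in np.argsort(values):
        add = min(p_high[i] - p[i], slack)
        p[i] += add
        slack -= add
        if slack <= 1e-12:
            break
    return float(p @ values)

# Toy MDP-IP: 3 states, 2 actions, interval transition probabilities.
n_states, n_actions, gamma = 3, 2, 0.9
reward = np.array([[0.0, 0.0], [1.0, 0.5], [5.0, 5.0]])   # R[s, a]
# P_low[s, a, s'] and P_high[s, a, s'] bound the imprecise transition model.
P_low = np.full((n_states, n_actions, n_states), 0.1)
P_high = np.full((n_states, n_actions, n_states), 0.6)

# Robust (maximin) value iteration: the agent maximises, "nature" minimises.
V = np.zeros(n_states)
for _ in range(200):
    Q = np.empty((n_states, n_actions))
    for s in range(n_states):
        for a in range(n_actions):
            Q[s, a] = reward[s, a] + gamma * worst_case_expectation(
                V, P_low[s, a], P_high[s, a])
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

print("robust values:", np.round(V, 3))
print("robust policy:", Q.argmax(axis=1))
```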
Abstract:
The motivation for this thesis work is the need to improve the reliability of equipment and the quality of service to railway passengers, as well as the requirement for cost-effective and efficient condition maintenance management in rail transportation. This thesis develops a fusion of various machine vision analysis methods to achieve high performance in the automation of wooden rail track inspection. Condition monitoring in rail transport is done manually by a human operator, where people rely on inference systems and assumptions to develop conclusions. The use of condition monitoring allows maintenance to be scheduled, or other actions to be taken, to avoid the consequences of failure before the failure occurs. Manual or automated condition monitoring of materials in fields of public transportation such as railway, aerial navigation and traffic safety, where safety is of primary importance, needs non-destructive testing (NDT). In general, wooden railway sleeper inspection is done manually by a human operator who moves along the rail sleepers and gathers information by visual and sound analysis to examine the presence of cracks. Human inspectors working on the lines visually inspect wooden rails to judge the quality of the rail sleepers. In this project work, a machine vision system is developed based on the manual visual analysis procedure, using digital cameras and image processing software to perform similar inspections. Manual inspection requires much effort, can be error prone, and the frequent changes in the inspected material make discrimination difficult even for a human operator. The machine vision system developed classifies the condition of the material by examining individual pixels of the images, processing them and attempting to develop conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge from the manual procedure. The pattern recognition approach for this thesis work was developed and achieved by a non-destructive testing method to identify the flaws in manually performed condition monitoring of sleepers. In this method, a test vehicle is designed to capture sleeper images in a manner similar to visual inspection by a human operator, and the raw data for the pattern recognition approach are provided by the captured images of the wooden sleepers. The data from the NDT method were further processed and appropriate features were extracted. The collection of data by the NDT method aims to achieve high accuracy and reliable classification results. A key idea is to use a non-supervised classifier, based on the features extracted with this method, to discriminate the condition of wooden sleepers into either good or bad; a self-organising map is used as the classifier for the wooden sleeper classification. In order to achieve greater integration, the data collected by the machine vision system were combined by a strategy called fusion. Data fusion was examined at two different levels, namely sensor-level fusion and feature-level fusion. As the goal was to reduce the human error in classifying rail sleepers as good or bad, the results obtained by feature-level fusion, compared with the actual classification, were satisfactory.
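A compact sketch of the unsupervised classification step described above, assuming fused feature vectors per sleeper image and a small self-organising map built with the MiniSom package; the features, map size and two-cluster reading are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np
from minisom import MiniSom

# Hypothetical fused feature vectors for sleeper images, e.g. texture,
# edge-density and crack-length features concatenated (feature-level fusion).
rng = np.random.default_rng(7)
good = rng.normal(loc=0.2, scale=0.05, size=(80, 6))
bad = rng.normal(loc=0.7, scale=0.10, size=(40, 6))
features = np.vstack([good, bad])

# Small SOM trained without labels (non-supervised classifier).
som = MiniSom(x=2, y=1, input_len=6, sigma=0.5, learning_rate=0.5,
              random_seed=0)
som.random_weights_init(features)
som.train_random(features, num_iteration=2000)

# Each sleeper is assigned to its best-matching unit; with a 2x1 map the two
# units play the role of the "good" and "bad" condition clusters.
clusters = np.array([som.winner(f)[0] for f in features])
print("cluster sizes:", np.bincount(clusters))
```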
Abstract:
Background: Voice processing in real time is challenging. A drawback of previous work on Hypokinetic Dysarthria (HKD) recognition is the requirement of controlled settings in a laboratory environment. A personal digital assistant (PDA) has been developed for home assessment of PD patients. The PDA offers sound processing capabilities, which allow the development of a module for recognition and quantification of HKD. Objective: To compose an algorithm for assessment of PD speech severity in the home environment based on a review synthesis. Methods: A two-tier review methodology is utilized. The first tier focuses on real-time problems in speech detection. In the second tier, acoustic features that are robust to medication changes in Levodopa-responsive patients are investigated for HKD recognition. Keywords such as "Hypokinetic Dysarthria" and "speech recognition in real time" were used in the search engines. IEEE Xplore produced the most useful search hits compared with Google Scholar, ELIN, EBRARY, PubMed and LIBRIS. Results: Vowel and consonant formants are the most relevant acoustic parameters for reflecting PD medication changes. Since the relevant speech segments (consonants and vowels) contain a minority of the speech energy, intelligibility can be improved by amplifying the voice signal using amplitude compression. Pause detection and peak-to-average power ratio calculations for voice segmentation produce rich voice features in real time. Voice segmentation can be enhanced by incorporating the zero-crossing rate (ZCR): consonants have a high ZCR whereas vowels have a low ZCR. The wavelet transform is found promising for voice analysis since it quantifies non-stationary voice signals over the time series using scale and translation parameters; in this way voice intelligibility in the waveforms can be analyzed in each time frame. Conclusions: This review evaluated HKD recognition algorithms with the aim of developing a tool for PD speech home assessment using modern mobile technology. An algorithm that tackles real-time constraints in HKD recognition, based on the review synthesis, is proposed. We suggest that speech features may be further processed using wavelet transforms and used with a neural network for detection and quantification of speech anomalies related to PD. Based on this model, patients' speech can be automatically categorized according to UPDRS speech ratings.
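A minimal sketch of the segmentation cues reviewed above: short-time energy for pause detection and the zero-crossing rate to separate consonant-like from vowel-like frames. Frame sizes, thresholds and the synthetic signal are illustrative choices, not values from the reviewed algorithms.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def short_time_energy(frames):
    return np.mean(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    # Fraction of sample-to-sample sign changes within each frame.
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

# Placeholder signal: silence + a vowel-like low-frequency tone + consonant-like noise.
fs = 16000
t = np.arange(0, 0.2, 1 / fs)
signal = np.concatenate([
    np.zeros(1600),                                          # pause
    0.5 * np.sin(2 * np.pi * 200 * t),                       # vowel-like
    0.2 * np.random.default_rng(0).standard_normal(1600)])   # consonant-like

frames = frame_signal(signal, frame_len=400, hop=200)   # 25 ms frames, 50 % overlap
energy = short_time_energy(frames)
zcr = zero_crossing_rate(frames)

# Simple rules: low energy -> pause; otherwise high ZCR -> consonant-like,
# low ZCR -> vowel-like.
labels = np.where(energy < 1e-4, "pause",
                  np.where(zcr > 0.1, "consonant", "vowel"))
print(list(zip(np.round(zcr, 2), labels))[:10])
```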
Abstract:
In this work we extend, analytically, the LTSN formulation to one-dimensional transport problems without azimuthal symmetry. For this problem we also present the solution with continuous dependence on the angular variable, from which an iterative method for solving the one-dimensional transport equation is established. We also discuss how the LTSN formulation is applied to the solution of time-dependent one-dimensional transport problems, both approximately, through numerical inversion of the flux transformed in the time variable, and analytically, through application of the LTSN method to the nodal equations. Numerical simulations and comparisons with results available in the literature are presented.