973 results for extended Hildebrand solubility approach
Abstract:
Movement disorders (MD) include a group of neurological disorders that involve the neuromotor systems. MD can result in several abnormalities, ranging from an inability to move to severe, constant and excessive movements. Stroke is a leading cause of disability, largely affecting older people worldwide. Traditional treatments rely on physiotherapy that is only partially grounded in theory and heavily reliant on the therapist's training and past experience. The lack of evidence that one treatment is more effective than another makes the rehabilitation of stroke patients a difficult task. Upper limb (UL) motor re-learning and recovery levels tend to improve with intensive physiotherapy delivery. The need for conclusive evidence supporting one method over another, and the need to stimulate the stroke patient, clearly suggest that traditional methods lack high motivational content, as well as objective, standardised analytical methods for evaluating a patient's performance and assessing therapy effectiveness. Despite all the advances in machine-mediated therapies, there is still a need to improve therapy tools. This chapter describes a new approach to robot-assisted neuro-rehabilitation for upper limb rehabilitation. Gentle/S introduces a new approach to the integration of appropriate haptic technologies with high-quality virtual environments, so as to deliver challenging and meaningful therapies to people with upper limb impairment as a consequence of a stroke. The described approach can enhance traditional therapy tools, provide therapy "on demand" and can present accurate, objective measurements of a patient's progression. Our recent studies suggest that the use of tele-presence and VR-based systems can potentially motivate patients to exercise for longer periods of time. Two identical prototypes have undergone extended clinical trials in the UK and Ireland with a cohort of 30 stroke subjects. From the lessons learnt with the Gentle/S approach, it is also clear that high-quality therapy devices of this nature have a role in the future delivery of stroke rehabilitation, and that machine-mediated therapies should be available to the patient and his/her clinical team from initial hospital admission through to long-term placement in the patient's home following hospital discharge.
Abstract:
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. In this paper we show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied with great success to analyze the climatic response to general forcings. The crucial value of the Ruelle theory lies in the fact that it makes it possible to compute the response of the system in terms of expectation values of explicit and computable functions of the phase space, averaged over the invariant measure of the unperturbed state. We choose as a test bed a classical version of the Lorenz 96 model, which, in spite of its simplicity, has a well-recognized prototypical value: it is a spatially extended one-dimensional model and presents the basic ingredients of the actual atmosphere, such as dissipation, advection and the presence of an external forcing. We recapitulate the main aspects of the general response theory and propose some new general results. We then analyze the frequency dependence of the response of both local and global observables to perturbations having localized as well as global spatial patterns. We derive analytically several properties of the corresponding susceptibilities, such as asymptotic behavior, validity of Kramers-Kronig relations, and sum rules, whose main ingredient is the causality principle. We show that all the coefficients of the leading asymptotic expansions, as well as the integral constraints, can be written as linear functions of parameters that describe the unperturbed properties of the system, such as its average energy. Some newly obtained empirical closure equations for such parameters allow us to express these properties as explicit functions of the unperturbed forcing parameter alone for a general class of chaotic Lorenz 96 models. We then verify the theoretical predictions against the outputs of the simulations to a high degree of precision. The theory is used to explain differences in the response of local and global observables, to define the intensive properties of the system, which do not depend on the spatial resolution of the Lorenz 96 model, and to generalize the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable for finite as well as infinite time. Finally, we propose a simple yet general methodology to study general climate change problems on virtually any time scale by resorting only to well-selected simulations, and by taking full advantage of ensemble methods. The specific case of the globally averaged surface temperature response to a general pattern of change of the CO2 concentration is discussed. We believe that the proposed approach may constitute a mathematically rigorous and practically very effective way to approach the problem of climate sensitivity, climate prediction, and climate change from a radically new perspective.
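To make the setting concrete, the sketch below integrates the standard Lorenz 96 equations, dx_k/dt = (x_{k+1} - x_{k-2}) x_{k-1} - x_k + F, and estimates the sensitivity of the average energy to the forcing F by finite differences of long-time ensemble averages. This is only a brute-force illustration of the kind of static response being discussed, not the Ruelle response formalism of the paper; the parameter values, ensemble size and integration settings are arbitrary choices made here for illustration.

```python
import numpy as np

def l96_tendency(x, F):
    """Standard Lorenz 96 tendency: dx_k/dt = (x_{k+1} - x_{k-2}) * x_{k-1} - x_k + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, F, dt):
    """One fourth-order Runge-Kutta step of the Lorenz 96 model."""
    k1 = l96_tendency(x, F)
    k2 = l96_tendency(x + 0.5 * dt * k1, F)
    k3 = l96_tendency(x + 0.5 * dt * k2, F)
    k4 = l96_tendency(x + dt * k3, F)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def mean_energy(x):
    """Global observable: average energy e = (1/N) * sum_k x_k^2 / 2."""
    return 0.5 * np.mean(x ** 2)

# Illustrative settings (not taken from the paper).
N, F0, dF, dt = 40, 8.0, 0.25, 0.01
members, spinup, window = 8, 2000, 5000
rng = np.random.default_rng(0)

def time_mean_energy(F):
    """Ensemble- and time-averaged energy on the attractor for forcing F."""
    vals = []
    for _ in range(members):
        x = F + 0.1 * rng.standard_normal(N)   # perturbed initial condition
        for _ in range(spinup):                 # discard the transient
            x = rk4_step(x, F, dt)
        acc = 0.0
        for _ in range(window):
            x = rk4_step(x, F, dt)
            acc += mean_energy(x)
        vals.append(acc / window)
    return float(np.mean(vals))

# Finite-difference estimate of the static response d<e>/dF around F0.
sensitivity = (time_mean_energy(F0 + dF) - time_mean_energy(F0 - dF)) / (2 * dF)
print(f"estimated d<e>/dF at F = {F0}: {sensitivity:.3f}")
```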
Abstract:
In a world of almost permanent and rapidly increasing electronic data availability, techniques for filtering, compressing, and interpreting this data to transform it into valuable and easily comprehensible information are of utmost importance. One key topic in this area is the capability to deduce future system behavior from a given data input. This book brings together for the first time the complete theory of data-based neurofuzzy modelling and the linguistic attributes of fuzzy logic in a single cohesive mathematical framework. After introducing the basic theory of data-based modelling, new concepts including extended additive and multiplicative submodels are developed, and their extensions to state estimation and data fusion are derived. All these algorithms are illustrated with benchmark and real-life examples to demonstrate their efficiency. Chris Harris and his group have carried out pioneering work which has tied together the fields of neural networks and linguistic rule-based algorithms. This book is aimed at researchers and scientists in time series modelling, empirical data modelling, knowledge discovery, data mining, and data fusion.
Abstract:
Using the formalism of the Ruelle response theory, we study how the invariant measure of an Axiom A dynamical system changes as a result of adding noise, and describe how the stochastic perturbation can be used to explore the properties of the underlying deterministic dynamics. We first find the expression for the change in the expectation value of a general observable when a white noise forcing is introduced in the system, both in the additive and in the multiplicative case. We also show that the difference between the expectation value of the power spectrum of an observable in the stochastically perturbed case and that of the same observable in the unperturbed case is equal to the variance of the noise times the square of the modulus of the linear susceptibility describing the frequency-dependent response of the system to perturbations with the same spatial pattern as the considered stochastic forcing. This provides a conceptual bridge between the change in the fluctuation properties of the system due to the presence of noise and the response of the unperturbed system to deterministic forcings. Using Kramers-Kronig theory, it is then possible to derive the real and imaginary parts of the susceptibility and thus deduce the Green function of the system for any desired observable. We then extend our results to rather general patterns of random forcing, from the case of several white noise forcings, to noise terms with memory, up to the case of a space-time random field. Explicit formulas are provided for each relevant case analysed. As a general result, we find, using an argument of positive-definiteness, that the power spectrum of the stochastically perturbed system is larger at all frequencies than the power spectrum of the unperturbed system. We provide an example of application of our results by considering the spatially extended chaotic Lorenz 96 model. These results clarify the property of stochastic stability of SRB measures in Axiom A flows, provide tools for analysing stochastic parameterisations and related closure ansätze to be implemented in modelling studies, and introduce new ways to study the response of a system to external perturbations. Taking into account the chaotic hypothesis, we expect that our results have practical relevance for a more general class of systems than those belonging to Axiom A.
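The central spectral relation described above can be written compactly; the notation below is chosen here for illustration (σ² is the noise variance, Φ the observable, χ^Φ the linear susceptibility to the same spatial forcing pattern) and may differ from the paper's:

```latex
% S^Phi_sigma: power spectrum of observable Phi with noise of intensity sigma
% S^Phi_0:     power spectrum of the unperturbed (deterministic) system
% chi^Phi:     linear susceptibility of Phi to the same spatial forcing pattern
\[
  S^{\Phi}_{\sigma}(\omega)
  \;=\;
  S^{\Phi}_{0}(\omega) \;+\; \sigma^{2}\,\bigl|\chi^{\Phi}(\omega)\bigr|^{2}
  \;\ge\;
  S^{\Phi}_{0}(\omega).
\]
```

In this form, the stochastically perturbed power spectrum exceeds the unperturbed one at every frequency, and |χ^Φ(ω)| can be read off from the difference of the two spectra, with the full susceptibility (and hence the Green function) then recoverable via the Kramers-Kronig relations, as stated in the abstract.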
Abstract:
An in vitro colon extended physiologically based extraction test (CE-PBET), which incorporates human gastrointestinal tract (GIT) parameters (including pH and chemistry, solid-to-fluid ratio, mixing and emptying rates), was applied for the first time to study the bioaccessibility of brominated flame retardants (BFRs) from the 3 main GIT compartments (stomach, small intestine and colon) following ingestion of indoor dust. Results revealed that the bioaccessibility of γ-HBCD (72%) was lower than that of the α- and β-isomers (92% and 80%, respectively), which may be attributed to the lower aqueous solubility of the γ-isomer (2 μg L⁻¹) compared to the α- and β-isomers (45 and 15 μg L⁻¹, respectively). No significant change in the enantiomeric fractions of HBCDs was observed in any of the studied samples. However, this does not completely exclude the possibility of in vivo enantioselective absorption of HBCDs, as the GIT cell lining and bacterial flora – which may act enantioselectively – are not included in the current CE-PBET model. While TBBP-A was almost completely (94%) bioaccessible, BDE-209 was the least (14%) bioaccessible of the studied BFRs. Bioaccessibility of tri- to hepta-BDEs ranged from 32 to 58%. No decrease in bioaccessibility with increasing degree of bromination was observed for the studied PBDEs.
Abstract:
The recovery of the Arctic polar vortex following stratospheric sudden warmings is found to take upward of 3 months in a particular subset of cases, termed here polar-night jet oscillation (PJO) events. The anomalous zonal-mean circulation above the pole during this recovery is characterized by a persistently warm lower stratosphere and, above this, a cold midstratosphere and anomalously high stratopause, which descends as the event unfolds. Composites of these events in the Canadian Middle Atmosphere Model show that the persistence of the lower-stratospheric anomaly is a result of strongly suppressed wave driving and weak radiative cooling at these heights. The upper-stratospheric and lower-mesospheric anomalies are driven immediately following the warming by anomalous planetary-scale eddies, following which anomalous parameterized nonorographic and orographic gravity waves play an important role. These details are found to be robust for PJO events (as opposed to sudden warmings in general) in that many details of individual PJO events match the composite mean. A zonal-mean quasigeostrophic model on the sphere is shown to reproduce the response to the thermal and mechanical forcings produced during a PJO event. The former is well approximated by Newtonian cooling. The response can thus be considered as a transient approach to the steady-state, downward-control limit. In this context, the time scale of the lower-stratospheric anomaly is determined by the transient, radiative response to the extended absence of wave driving. The extent to which the dynamics of the wave-driven descent of the stratopause can be considered analogous to the descending phases of the quasi-biennial oscillation (QBO) is also discussed.
Abstract:
This paper generalises and applies recently developed blocking diagnostics in a two-dimensional latitude-longitude context, which takes into consideration both mid- and high-latitude blocking. These diagnostics identify characteristics of the associated wave-breaking as seen in the potential temperature (θ) on the dynamical tropopause, in particular the cyclonic or anticyclonic Direction of wave-Breaking (DB index), and the Relative Intensity (RI index) of the air masses that contribute to blocking formation. The methodology is extended to a 2-D domain and a cluster technique is deployed to classify mid- and high-latitude blocking according to the wave-breaking characteristics. Mid-latitude blocking is observed over Europe and Asia, where the meridional gradient of θ is generally weak, whereas high-latitude blocking is mainly present over the oceans, to the north of the jet-stream, where the meridional gradient of θ is much stronger. They occur respectively on the equatorward and poleward flank of the jet-stream, where the horizontal shear ∂u/∂y is positive in the first case and negative in the second case. A regional analysis is also conducted. It is found that cold-anticyclonic and cyclonic blocking divert the storm-track respectively to the south and to the north over the East Atlantic and western Europe. Furthermore, warm-cyclonic blocking over the Pacific and cold-anticyclonic blocking over Europe are identified as the most persistent types and are associated with large amplitude anomalies in temperature and precipitation. Finally, the high-latitude, cyclonic events seem to correlate well with low-frequency modes of variability over the Pacific and Atlantic Ocean.
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where blood oxygen-level-dependent signals are recorded, understanding and accurately modeling the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical, data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
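As a point of reference, the sketch below fits an illustrative first-order ARX relation between a noisy input and output series using classical TLS via the SVD of the augmented data matrix — the errors-in-variables situation that motivates going beyond ordinary LS. The model order, coefficients, variable names and noise levels are made up for illustration, and the paper's regularisation and filtering steps are not reproduced here.

```python
import numpy as np

def tls_fit(A, b):
    """Classical total least squares: find x minimising perturbations of both A and b
    such that (A + dA) x = b + db, via the SVD of the augmented matrix [A b]."""
    Ab = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Ab, full_matrices=False)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:-1] / v[-1]          # TLS parameter estimate

# Illustrative first-order ARX relation between fractional changes in
# CBV (output y) and CBF (input u):  y[t] = a*y[t-1] + b0*u[t] + b1*u[t-1]
rng = np.random.default_rng(1)
T = 500
u = rng.standard_normal(T)
a_true, b0_true, b1_true = 0.7, 0.2, 0.1
y = np.zeros(T)
for t in range(1, T):
    y[t] = a_true * y[t - 1] + b0_true * u[t] + b1_true * u[t - 1]

# Measurement noise on both input and output: the errors-in-variables
# situation in which ordinary least squares is biased.
u_obs = u + 0.05 * rng.standard_normal(T)
y_obs = y + 0.05 * rng.standard_normal(T)

A = np.column_stack([y_obs[:-1], u_obs[1:], u_obs[:-1]])
b = y_obs[1:]
print("TLS estimate of (a, b0, b1):", tls_fit(A, b))
```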
Abstract:
Sea surface temperature (SST) can be estimated from day and night observations of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) by optimal estimation (OE). We show that exploiting the 8.7 μm channel, in addition to the "traditional" wavelengths of 10.8 and 12.0 μm, improves OE SST retrieval statistics in validation. However, the main benefit is an improvement in the sensitivity of the SST estimate to variability in true SST. In a fair, single-pixel comparison, the 3-channel OE gives better results than the SST estimation technique presently operational within the Ocean and Sea Ice Satellite Application Facility. The operational technique uses SST retrieval coefficients, followed by a bias-correction step informed by radiative transfer simulation. However, the operational technique has an additional "atmospheric correction smoothing", which improves its noise performance and hitherto had no analogue within the OE framework. Here, we propose an analogue to atmospheric correction smoothing, based on the expectation that atmospheric total column water vapour has a longer spatial correlation length scale than SST features. The approach extends the observations input to the OE to include the averaged brightness temperatures (BTs) of nearby clear-sky pixels, in addition to the BTs of the pixel for which SST is being retrieved. The retrieved quantities are then the single-pixel SST and the clear-sky total column water vapour averaged over the vicinity of the pixel. This reduces the noise in the retrieved SST significantly. The robust standard deviation of the new OE SST compared to matched drifting buoys becomes 0.39 K for all data. The smoothed OE gives an SST sensitivity of 98% on average. This means that diurnal temperature variability and ocean frontal gradients are more faithfully estimated, and that the influence of the prior SST used is minimal (2%). This benefit is not available using traditional atmospheric correction smoothing.
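For orientation, the fragment below performs a single linearised optimal-estimation (Rodgers-type) update for a two-element state of SST and total column water vapour from three brightness temperatures, and evaluates the averaging-kernel element corresponding to the kind of SST sensitivity quoted above. All numerical values (prior, Jacobian, noise covariances) are illustrative placeholders, not SEVIRI or OSI SAF quantities.

```python
import numpy as np

# One linearised optimal-estimation update:
#   x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - F(x_a))
# State: [SST in K, total column water vapour in kg m^-2].

x_a = np.array([288.0, 30.0])                 # prior state
S_a = np.diag([1.5**2, 10.0**2])              # prior covariance
K = np.array([[0.95, -0.05],                  # illustrative Jacobian dBT/dx for
              [0.90, -0.10],                  # channels near 8.7, 10.8, 12.0 um
              [0.85, -0.15]])
S_e = np.diag([0.15**2] * 3)                  # BT noise covariance
y_minus_f = np.array([0.40, 0.35, 0.25])      # observed minus simulated BTs at x_a

S_e_inv = np.linalg.inv(S_e)
S_hat = np.linalg.inv(K.T @ S_e_inv @ K + np.linalg.inv(S_a))   # posterior covariance
x_hat = x_a + S_hat @ K.T @ S_e_inv @ y_minus_f                 # retrieved state

# Averaging kernel: sensitivity of the retrieved state to the true state.
A = S_hat @ K.T @ S_e_inv @ K
print("retrieved state:", x_hat)
print("SST sensitivity (A[0, 0]):", A[0, 0])
```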
Abstract:
A Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation, when it exists, may lead to biased estimates.
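A minimal sketch of the kind of per-observation variance draw involved is given below, using the common Geweke (1993) scale-mixture parameterisation in which each error has variance σ²ω_i and ν/ω_i receives a chi-square(ν) prior (equivalently, the error precision scale is Gamma distributed). The conditional posterior used here, and all numerical settings, are standard textbook choices rather than a reproduction of the paper's sampler.

```python
import numpy as np

def draw_variance_scales(resid, sigma2, nu, rng):
    """Gibbs draw of observation-specific variance scales omega_i for the
    scale-mixture model e_i ~ N(0, sigma2 * omega_i), prior nu/omega_i ~ chi2(nu).
    Conditional posterior: (nu + e_i^2 / sigma2) / omega_i ~ chi2(nu + 1)."""
    return (nu + resid**2 / sigma2) / rng.chisquare(nu + 1.0, size=resid.shape)

rng = np.random.default_rng(2)
n = 200
# Synthetic residuals whose variance doubles half-way through the sample.
resid = rng.standard_normal(n) * np.where(np.arange(n) < n // 2, 1.0, 2.0)
omega = draw_variance_scales(resid, sigma2=1.0, nu=5.0, rng=rng)

# These draws would reweight the observations (a GLS step) when sampling the
# coefficients of the structural and instrument equations within the MCMC sweep.
print("mean omega, low- vs high-variance half:",
      omega[:n // 2].mean(), omega[n // 2:].mean())
```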
Abstract:
In addition to CO2, the climate impact of aviation is strongly influenced by non-CO2 emissions, such as nitrogen oxides, which influence ozone and methane, and water vapour, which can lead to the formation of persistent contrails in ice-supersaturated regions. Because these non-CO2 emission effects are characterised by a short lifetime, their climate impact largely depends on emission location and time; that is to say, emissions in certain locations (or at certain times) can lead to a greater climate impact (even on the global average) than the same emission in other locations (or at other times). Avoiding these climate-sensitive regions might thus be beneficial to climate. Here, we describe a modelling chain for investigating this climate impact mitigation option. The chain forms a multi-step modelling approach, starting with the simulation of the fate of emissions released at a certain location and time (time-region grid points). This is performed with the chemistry–climate model EMAC, extended via the two submodels AIRTRAC (V1.0) and CONTRAIL (V1.0), which describe the contribution of emissions to the composition of the atmosphere and to contrail formation, respectively. The impact of emissions from the large number of time-region grid points is calculated efficiently by applying a Lagrangian scheme. EMAC also includes the calculation of radiative impacts, which are, in a second step, the input to climate metric formulas describing the global climate impact of the emission at each time-region grid point. The result of the modelling chain comprises a four-dimensional data set in space and time, which we call climate cost functions and which describes the global climate impact of an emission at each grid point and each point in time. In a third step, these climate cost functions are used in an air traffic simulator (SAAM) coupled to an emission tool (AEM) to optimise aircraft trajectories for the North Atlantic region. Here, we describe the details of this new modelling approach and show some example results. A number of sensitivity analyses are performed to motivate the settings of individual parameters. A stepwise sanity check of the results of the modelling chain is undertaken to demonstrate the plausibility of the climate cost functions.
Abstract:
Extended cusp-like regions (ECRs) are surveyed, as observed by the Magnetospheric Ion Composition Sensor (MICS) of the Charge and Mass Magnetospheric Ion Composition Experiment (CAMMICE) instrument aboard Polar between 1996 and 1999. The first of these ECR events was observed on 29 May 1996, an event widely discussed in the literature and initially thought to be caused by tail lobe reconnection due to the coinciding prolonged interval of strong northward IMF. ECRs are characterized here by intense fluxes of magnetosheath-like ions in the energy-per-charge range of ~1 to 10 keV e⁻¹. We investigate the concurrence of ECRs with intervals of prolonged (lasting longer than 1 and 3 hours) orientations of the IMF vector and high solar wind dynamic pressure (PSW). Also investigated is the opposite concurrence, i.e., of the IMF and high PSW with ECRs. (Note that these surveys are asking distinctly different questions.) The former survey indicates that ECRs have no overall preference for any orientation of the IMF. However, the latter survey reveals that during northward IMF, particularly when accompanied by high PSW, ECRs are more likely. We also test for orbital and seasonal effects, revealing that Polar has to be in a particular region to observe ECRs and that they occur more frequently around late spring. These results indicate that ECRs have three distinct causes and so can relate to extended intervals in (1) the cusp on open field lines, (2) the magnetosheath, and (3) the magnetopause indentation at the cusp, with the latter allowing magnetosheath plasma to approach close to the Earth without entering the magnetosphere.
Abstract:
Objectives: This study provides the first large-scale analysis of the age at which adolescents in medieval England entered and completed the pubertal growth spurt. This new method has implications for expanding our knowledge of adolescent maturation across different time periods and regions. Methods: In total, 994 adolescent skeletons (10-25 years) from four urban sites in medieval England (AD 900-1550) were analysed for evidence of pubertal stage using new osteological techniques developed from the clinical literature (i.e. hamate hook development, CVM, canine mineralisation, iliac crest ossification, radial fusion). Results: Adolescents began puberty at a similar age to modern children, at around 10-12 years, but the onset of menarche in girls was delayed by up to 3 years, occurring around 15 years for most in the study sample and around 17 years for females living in London. Modern European males usually complete their maturation by 16-18 years; medieval males took longer, with the deceleration stage of the growth spurt extending as late as 21 years. Conclusions: This research provides the first attempt to directly assess the age of pubertal development in adolescents during the tenth to seventeenth centuries. Poor diet, infections, and physical exertion may have contributed to delayed development in the medieval adolescents, particularly for those living in the city of London. This study sheds new light on the nature of adolescence in the medieval period, highlighting an extended period of physical and social transition.
Abstract:
Background: Endovascular procedures and direct surgical clipping are the main therapeutic modalities for managing BAAs. Furthermore, giant or wide-necked aneurysms and those that involve the PCA or perforators at its neck usually are not embolized. Case Description: A 55-year-old man presented to the emergency room complaining of sudden and intense headache. Neurological examination evidenced meningismus. Computed tomography disclosed a subarachnoid hemorrhage (Fisher grade III). Arteriograms revealed a BAA whose neck was partially obscured by the PCP. A standard pterional craniotomy was performed, followed by extensive drilling of the greater sphenoid wing. The neck was partially hidden by the PCP, and no proximal control could be obtained without drilling the PCP and opening the CS (modified TcA). Drilling of the PCP was begun by cutting the overlying dura and extended caudally as much as possible. Next, opening of the roof of the CS was performed by incising the dura in the oculomotor trigone, medial and parallel to the oculomotor nerve and lateral to the ICA; the incision progressed posteriorly toward the dorsum sellae. Further resection of the dorsum sellae and clivus was carried out. After performing these steps, proximal control was obtained, the aneurysm was deflated, perforators were saved, and the aneurysm was clipped. Conclusions: This study has demonstrated the clinical usefulness of an abbreviated form of the TcA, termed the "modified TcA," in approaching complex low-lying BAAs. It provides additional surgical room by removing the PCP and partially opening the CS, which permits further bone removal and improves exposure. (C) 2009 Elsevier Inc. All rights reserved.
Abstract:
In this paper we present a finite difference method for solving two-dimensional viscoelastic unsteady free surface flows governed by the single equation version of the eXtended Pom-Pom (XPP) model. The momentum equations are solved by a projection method which uncouples the velocity and pressure fields. We are interested in low Reynolds number flows and, to enhance the stability of the numerical method, an implicit technique for computing the pressure condition on the free surface is employed. This strategy is invoked to solve the governing equations within a Marker-and-Cell type approach while simultaneously calculating the correct normal stress condition on the free surface. The numerical code is validated by performing mesh refinement on a two-dimensional channel flow. Numerical results include an investigation of the influence of the parameters of the XPP equation on the extrudate swelling ratio and the simulation of the Barus effect for XPP fluids. (C) 2010 Elsevier B.V. All rights reserved.
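In generic form, a projection (pressure-correction) step of the kind referred to above advances an intermediate velocity with the momentum terms and then enforces incompressibility through a pressure Poisson equation. The splitting below is written in standard non-dimensional notation chosen here for illustration, and omits the paper's specific MAC discretisation, implicit free-surface pressure condition and XPP constitutive update:

```latex
% u: velocity, tau: viscoelastic (XPP) extra stress, psi: pressure-like potential
\begin{aligned}
  \mathbf{u}^{*} &= \mathbf{u}^{n} + \Delta t\,\Bigl[-(\mathbf{u}^{n}\!\cdot\!\nabla)\mathbf{u}^{n}
      + \tfrac{1}{Re}\,\nabla^{2}\mathbf{u}^{n} + \nabla\!\cdot\!\boldsymbol{\tau}^{n}\Bigr],\\
  \nabla^{2}\psi^{\,n+1} &= \frac{\nabla\!\cdot\!\mathbf{u}^{*}}{\Delta t},\\
  \mathbf{u}^{n+1} &= \mathbf{u}^{*} - \Delta t\,\nabla\psi^{\,n+1},
\end{aligned}
```

so that the intermediate velocity u* carries the advective, viscous and extra-stress contributions, and the correction by ∇ψ projects it onto the divergence-free space, uncoupling the velocity and pressure fields as described in the abstract.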