943 results for Successive Overrelaxation method with 2 parameters
Abstract:
A recent nonlinear system by Friston et al. (2000, NeuroImage 12: 466–477) links changes in the BOLD response to changes in neural activity. The system consists of five subsystems, linking: (1) neural activity to flow changes; (2) flow changes to oxygen delivery to tissue; (3) flow changes to changes in blood volume and venous outflow; (4) changes in flow, volume, and oxygen extraction fraction to deoxyhemoglobin changes; and finally (5) volume and deoxyhemoglobin changes to the BOLD response. In subsystem 2, Friston et al. exploit a model by Buxton and Frank that couples flow changes to changes in oxygen metabolism and assumes the tissue oxygen concentration to be close to zero. We describe below a model of the coupling between flow and oxygen delivery which takes into account the modulatory effect of changes in tissue oxygen concentration. The major development has been to extend the original Buxton and Frank model for oxygen transport to a full dynamic capillary model, making the model applicable to both transient and steady-state conditions. Furthermore, our modification enables us to determine the time series of CMRO2 changes under different conditions, including CO2 challenges. We compare the differences in the performance of the “Friston system” using the original model of Buxton and Frank and using our model. We also compare the data predicted by our model (with appropriate parameters) to data from a series of OIS studies. The qualitative differences in the behaviour of the models are exposed by different experimental simulations and by comparison with the results of OIS data from brief and extended stimulation protocols and from experiments using hypercapnia.
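For readers unfamiliar with the system's structure, the five subsystems reduce to a small set of coupled ODEs. The Python sketch below is indicative only: it assumes the standard balloon-model equations with illustrative (not fitted) parameter values and a common choice of output coefficients, and it uses the original Buxton and Frank oxygen-extraction form that this abstract sets out to replace.

```python
# Minimal sketch of the five-subsystem hemodynamic model, forward-Euler
# integration. All parameter values are illustrative assumptions.
import numpy as np

eps, tau_s, tau_f = 0.5, 0.8, 0.4   # neural efficacy, signal decay, autoregulation
tau_0, alpha, E0 = 1.0, 0.32, 0.4   # transit time, stiffness exponent, resting O2 extraction
V0 = 0.02                           # resting venous blood volume fraction

def simulate(u, dt=0.01):
    """Integrate the coupled subsystems for a neural input time series u."""
    s = 0.0; f = v = q = 1.0
    bold = []
    for ut in u:
        ds = eps * ut - s / tau_s - (f - 1.0) / tau_f        # (1) activity -> flow signal
        df = s                                               # (2) flow dynamics
        E = 1.0 - (1.0 - E0) ** (1.0 / f)                    # Buxton-Frank O2 extraction
        dv = (f - v ** (1.0 / alpha)) / tau_0                # (3) volume / venous outflow
        dq = (f * E / E0 - v ** (1.0 / alpha) * q / v) / tau_0  # (4) deoxyhemoglobin
        s += dt * ds; f += dt * df; v += dt * dv; q += dt * dq
        # (5) BOLD signal from volume and deoxyhemoglobin (one common coefficient choice)
        k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2
        bold.append(V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v)))
    return np.array(bold)

# Brief stimulus: 1 s of activity within a 30 s window
t = np.arange(0, 30, 0.01)
u = ((t >= 1) & (t < 2)).astype(float)
y = simulate(u)
print(f"peak BOLD signal change: {y.max():.4f}")
```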
Abstract:
We present a Galerkin method with continuous piecewise polynomial elements for fully nonlinear elliptic equations. A key tool is the discretization proposed in Lakkis and Pryer (2011), which allows us to work directly with the strong form of a linear PDE. An added benefit of this discretization is that a recovered (finite element) Hessian is a byproduct of the solution process. We build on the linear method and ultimately construct two different methodologies for the solution of second order fully nonlinear PDEs. Benchmark numerical results illustrate the convergence properties of the scheme for some test problems, as well as for the Monge–Ampère equation and the Pucci equation.
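For concreteness, the two benchmark problems mentioned are usually posed as Dirichlet problems of the following standard forms; the two-dimensional Pucci form shown is one common choice and may differ in detail from the paper's test cases.

```latex
% Standard Dirichlet formulations of the two fully nonlinear benchmarks,
% on a convex domain $\Omega \subset \mathbb{R}^d$.
\begin{align*}
  \det \mathrm{D}^2 u &= f \ \text{in } \Omega, & u &= g \ \text{on } \partial\Omega
    && \text{(Monge--Amp\`ere, } f > 0\text{)}\\
  \alpha\,\lambda_{\max}(\mathrm{D}^2 u) + \lambda_{\min}(\mathrm{D}^2 u) &= 0 \ \text{in } \Omega,
    & u &= g \ \text{on } \partial\Omega
    && \text{(Pucci, } \alpha > 1,\ d = 2\text{)}
\end{align*}
```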
Abstract:
In this paper, ensembles of forecasts (of up to six hours) are studied from a convection-permitting model with a representation of model error due to unresolved processes. The ensemble prediction system (EPS) used is an experimental convection-permitting version of the UK Met Office's 24-member Global and Regional Ensemble Prediction System (MOGREPS). The method of representing model error variability, which perturbs parameters within the model's parameterisation schemes, has been modified, and we investigate the impact of applying this scheme in different ways. These are: a control ensemble where all ensemble members have the same parameter values; an ensemble where the parameters differ between members but are fixed in time; and ensembles where the parameters are updated randomly every 30 or 60 min. The choice of parameters and their ranges of variability have been determined from expert opinion and parameter sensitivity tests. A case of frontal rain over the southern UK has been chosen, which has a multi-banded rainfall structure. The consequences of including model error variability in the case studied are mixed and are summarised as follows. The multiple banding, evident in the radar, is not captured by any single member. However, the single band is positioned in some members where a secondary band is present in the radar. This is found for all ensembles studied. Adding model error variability with parameters fixed in time does increase the ensemble spread for near-surface variables like wind and temperature, but can actually decrease the spread of the rainfall. Perturbing the parameters periodically throughout the forecast does not further increase the spread and exhibits “jumpiness” in the spread at times when the parameters are perturbed. Adding model error variability gives an improvement in forecast skill after the first 2–3 h of the forecast for near-surface temperature and relative humidity. For precipitation skill scores, adding model error variability improves the skill in the first 1–2 h of the forecast, but reduces it thereafter. Complementary experiments were performed where the only difference between members was the set of parameter values (i.e. no initial condition variability). The resulting spread was found to be significantly less than the spread from initial condition variability alone.
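The three configurations can be illustrated schematically. In the Python sketch below, the parameter names and ranges are hypothetical placeholders (not the MOGREPS parameters), and the model time step is a stub; it only shows the control, fixed-in-time, and periodically re-drawn modes described above.

```python
# Illustrative sketch, not the MOGREPS implementation, of the three ways
# of applying parameter variability across ensemble members.
import random

PARAM_RANGES = {"entrainment_rate": (0.5, 2.0),   # hypothetical scheme parameters
                "mixing_length":    (50.0, 500.0)}

def draw_params(rng):
    """Draw one parameter set uniformly from the expert-elicited ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def run_ensemble(n_members, hours, mode, update_minutes=60, seed=0):
    rng = random.Random(seed)
    default = {k: sum(r) / 2 for k, r in PARAM_RANGES.items()}
    members = []
    for m in range(n_members):
        # control: identical parameters; fixed: one draw per member, constant in time
        params = default if mode == "control" else draw_params(rng)
        history = []
        for minute in range(hours * 60):
            if mode == "periodic" and minute > 0 and minute % update_minutes == 0:
                params = draw_params(rng)   # re-draw every 30 or 60 min
            history.append(dict(params))    # stand-in for one model time step
        members.append(history)
    return members

ens = run_ensemble(n_members=24, hours=6, mode="periodic", update_minutes=30)
```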
Abstract:
This paper presents the development of a rapid method with ultraperformance liquid chromatography–tandem mass spectrometry (UPLC-MS/MS) for the qualitative and quantitative analyses of plant proanthocyanidins directly from crude plant extracts. The method utilizes a range of cone voltages to achieve the depolymerization step in the ion source of both smaller oligomers and larger polymers. The formed depolymerization products are further fragmented in the collision cell to enable their selective detection. This UPLC-MS/MS method is able to separately quantitate the terminal and extension units of the most common proanthocyanidin subclasses, that is, procyanidins and prodelphinidins. The resulting data enable (1) quantitation of the total proanthocyanidin content, (2) quantitation of total procyanidins and prodelphinidins including the procyanidin/prodelphinidin ratio, (3) estimation of the mean degree of polymerization for the oligomers and polymers, and (4) estimation of how the different procyanidin and prodelphinidin types are distributed along the chromatographic hump typically produced by large proanthocyanidins. All of this is achieved within the 10 min period of analysis, which makes the presented method a significant addition to the chemistry tools currently available for the qualitative and quantitative analyses of complex proanthocyanidin mixtures from plant extracts.
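Given separately quantified terminal and extension units, derived quantities (1) to (3) follow directly. A minimal Python sketch, using the usual depolymerization-based relations (assumed here, not quoted from the paper), with concentrations in arbitrary mass units:

```python
# Hedged sketch of the downstream calculations once terminal (term) and
# extension (ext) units of procyanidins (PC) and prodelphinidins (PD)
# have been quantified separately.
def pa_summary(pc_term, pc_ext, pd_term, pd_ext):
    total_pc = pc_term + pc_ext
    total_pd = pd_term + pd_ext
    total_pa = total_pc + total_pd                                   # (1) total PAs
    pc_pd_ratio = total_pc / total_pd if total_pd else float("inf")  # (2) PC/PD ratio
    mdp = total_pa / (pc_term + pd_term)   # (3) mean degree of polymerization:
                                           # all units per chain terminus
    return {"total_PA": total_pa, "PC/PD": pc_pd_ratio, "mDP": mdp}

print(pa_summary(pc_term=1.2, pc_ext=6.3, pd_term=0.8, pd_ext=9.7))
```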
Abstract:
Autism Spectrum Disorder (ASD) is diagnosed on the basis of behavioral symptoms, but cognitive abilities may also be useful in characterizing individuals with ASD. One hundred seventy-eight high-functioning male adults, half with ASD and half without, completed tasks assessing IQ, a broad range of cognitive skills, and autistic and comorbid symptomatology. The aims of the study were: first, to determine whether significant differences existed between cases and controls on cognitive tasks, and whether cognitive profiles, derived using a multivariate classification method with data from multiple cognitive tasks, could distinguish between the two groups; second, to establish whether cognitive skill level was correlated with degree of autistic symptom severity; third, whether cognitive skill level was correlated with degree of comorbid psychopathology; and fourth, to compare the cognitive characteristics of individuals with Asperger Syndrome (AS) and high-functioning autism (HFA). After controlling for IQ, ASD and control groups scored significantly differently on tasks of social cognition, motor performance, and executive function (all P < 0.05). To investigate cognitive profiles, 12 variables were entered into a support vector machine (SVM), which achieved good classification accuracy (81%) at a level significantly better than chance (P < 0.0001). After correcting for multiple comparisons, there were no significant associations between cognitive performance and severity of either autistic or comorbid symptomatology. There were no significant differences between the AS and HFA groups on the cognitive tasks. Cognitive classification models could be a useful aid to the diagnostic process when used in conjunction with other data sources, including clinical history.
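The classification step can be reproduced in outline with any SVM implementation. Below is a minimal sketch on synthetic stand-in data; the study's kernel, validation scheme, and variables are not specified in the abstract, so every such choice here is an assumption.

```python
# Sketch of a 12-variable SVM classification with cross-validated accuracy
# and a permutation test against chance, on synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score, permutation_test_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, p = 178, 12                       # 178 participants, 12 cognitive variables
X = rng.normal(size=(n, p))
y = np.repeat([0, 1], n // 2)        # half ASD, half controls
X[y == 1, :3] += 0.8                 # inject a group difference on a few tasks

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=10).mean()
score, _, pvalue = permutation_test_score(clf, X, y, cv=10,
                                          n_permutations=200, random_state=0)
print(f"10-fold accuracy: {acc:.2f}, permutation p-value: {pvalue:.4f}")
```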
Abstract:
Seamless phase II/III clinical trials are conducted in two stages, with treatment selection at the first stage. In the first stage, patients are randomized to a control or to one of k > 1 experimental treatments. At the end of this stage, interim data are analysed, and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, data may be available at the interim analysis on some early outcome for a larger number of patients than those for whom the primary endpoint is available. These early endpoint data can thus be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on the previously proposed approaches and uses data available at the interim analysis to estimate these parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. This method is shown to perform well compared with the two previously described methods for a wide range of true parameter values: its performance is similar to, and in some cases better than, that of either of the two previously proposed methods.
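The adaptive idea lends itself to a Monte Carlo sketch: plug the interim estimates of the treatment effects and the early/primary-endpoint correlation into simulations of each selection rule, and adopt whichever rule selects the truly best treatment most often. The two rules below (early endpoint only versus an equally weighted combination) are illustrative stand-ins; the abstract does not specify the two previously proposed methods.

```python
# Hedged sketch: choose between two treatment-selection rules at the interim
# analysis by simulating their probability of correct selection.
import numpy as np

rng = np.random.default_rng(1)

def prob_correct_selection(rule, eff_early, eff_final, rho, n_sims=5000):
    k = len(eff_early)
    best = int(np.argmax(eff_final))            # truly most effective treatment
    hits = 0
    for _ in range(n_sims):
        e1 = rng.normal(size=k)                 # early-endpoint estimation noise
        e2 = rho * e1 + np.sqrt(1 - rho**2) * rng.normal(size=k)  # correlated noise
        z_early, z_final = eff_early + e1, eff_final + e2
        hits += int(np.argmax(rule(z_early, z_final)) == best)
    return hits / n_sims

rule_a = lambda z_e, z_f: z_e                    # select on the early endpoint
rule_b = lambda z_e, z_f: 0.5 * z_e + 0.5 * z_f  # combine both endpoints

# stand-ins for the estimates available at the interim analysis
eff_early = np.array([0.2, 0.5, 0.3])
eff_final = np.array([0.1, 0.4, 0.35])
rho_hat = 0.6
p_a = prob_correct_selection(rule_a, eff_early, eff_final, rho_hat)
p_b = prob_correct_selection(rule_b, eff_early, eff_final, rho_hat)
print(f"P(correct): rule A = {p_a:.2f}, rule B = {p_b:.2f}")
```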
Abstract:
Scintillometry, a form of ground-based remote sensing, provides the capability to estimate surface heat fluxes over scales of a few hundred metres to kilometres. Measurements are spatial averages, making this technique particularly valuable over areas with moderate heterogeneity such as mixed agricultural or urban environments. In this study, we present the structure parameters of temperature and humidity, which can be related to the sensible and latent heat fluxes through similarity theory, for a suburban area in the UK. The fluxes are provided in the second paper of this two-part series. A millimetre-wave scintillometer was combined with an infrared scintillometer along a 5.5 km path over northern Swindon. The pairing of these two wavelengths offers sensitivity to both temperature and humidity fluctuations, and the correlation between wavelengths is also used to retrieve the path-averaged temperature–humidity correlation. Comparison is made with structure parameters calculated from an eddy covariance station located close to the centre of the scintillometer path. The performance of the measurement techniques under different conditions is discussed. Similar behaviour is seen between the two data sets at sub-daily timescales. For the two summer-to-winter periods presented here, similar evolution is displayed across the seasons. A higher vegetation fraction within the scintillometer source area is consistent with the lower Bowen ratio observed (midday Bowen ratio < 1) compared with more built-up areas around the eddy covariance station. The energy partitioning is further explored in the companion paper.
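For context, a common route from the measured structure parameters to the fluxes runs through Monin-Obukhov similarity. One standard form is sketched below; the similarity functions and sign conventions vary between formulations, so this is indicative rather than the paper's exact choice.

```latex
% Structure parameters of temperature ($C_T^2$) and humidity ($C_q^2$) scaled
% by the turbulent scales $\theta_*$, $q_*$ via similarity functions of the
% stability parameter; fluxes follow from the scales (one common convention).
\begin{align*}
  \frac{C_T^2\,(z_{\mathrm{eff}}-d)^{2/3}}{\theta_*^2}
    &= f_T\!\Big(\frac{z_{\mathrm{eff}}-d}{L}\Big),
  & H &= -\rho\,c_p\,u_*\,\theta_*,\\
  \frac{C_q^2\,(z_{\mathrm{eff}}-d)^{2/3}}{q_*^2}
    &= f_q\!\Big(\frac{z_{\mathrm{eff}}-d}{L}\Big),
  & \lambda E &= -\rho\,\lambda_v\,u_*\,q_*.
\end{align*}
```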
Abstract:
This study compared preliminary estimates of effective leaf area index (LAI) derived from fish-eye lens photographs with those estimated from airborne full-waveform small-footprint LiDAR data for a forest dataset in Australia. The full-waveform data were decomposed and optimized using a trust-region-reflective algorithm to extract denser point clouds. LiDAR LAI estimates were derived in two ways: (1) from the probability of discrete pulses reaching the ground without being intercepted (point method), and (2) from raw-waveform canopy height profile processing adapted to small-footprint laser altimetry (waveform method), accounting for the reflectance ratio between vegetation and ground. The best results, which matched the hemispherical photography estimates, were achieved for the waveform method with a study-area-adjusted reflectance ratio of 0.4 (RMSE of 0.15 and 0.03 at plot and site level, respectively). The point method generally overestimated, whereas the waveform method with an arbitrary reflectance ratio of 0.5 underestimated, the fish-eye lens LAI estimates.
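The two retrieval routes can be summarised in a few lines. The Python sketch below uses textbook gap-fraction forms (extinction via G of about 0.5 for a spherical leaf angle distribution), not the paper's exact processing chain; the energy-ratio correction with the vegetation-to-ground reflectance ratio is the assumed standard formulation.

```python
# Hedged sketch of the point and waveform LAI retrieval methods.
import math

def lai_point_method(n_ground, n_total, g=0.5, theta=0.0):
    """Effective LAI from the fraction of discrete pulses reaching the ground."""
    p_gap = n_ground / n_total
    return -math.cos(theta) * math.log(p_gap) / g

def lai_waveform_method(e_veg, e_ground, rho_ratio, g=0.5, theta=0.0):
    """Effective LAI from integrated waveform energies, corrected by the
    vegetation-to-ground reflectance ratio rho_ratio = rho_v / rho_g."""
    p_gap = e_ground / (e_ground + e_veg / rho_ratio)
    return -math.cos(theta) * math.log(p_gap) / g

print(lai_point_method(n_ground=220, n_total=1000))
print(lai_waveform_method(e_veg=3.5, e_ground=1.0, rho_ratio=0.4))
```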
Abstract:
Attitudes towards risk and uncertainty have been shown to be highly context-dependent and sensitive to the measurement technique employed. We present data collected in controlled experiments with 2,939 subjects in 30 countries, measuring risk and uncertainty attitudes through incentivized measures as well as survey questions. Our data show clearly that measures correlate not only within decision contexts or measurement methods, but also across contexts and methods. This points to the existence of one underlying “risk preference”, which influences attitudes independently of the measurement method or choice domain. We furthermore find that answers to a general and a financial survey question correlate with incentivized lottery choices in most countries. Incentivized and survey measures also correlate significantly between countries. This opens up the possibility of conducting cross-cultural comparisons of risk attitudes using survey instruments.
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this results in a reduction in the height errors along a waterline, the waterline is only a linear feature in a two-dimensional space, so the correction would by itself cover a small fraction of the floodplain. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
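The core waterline-averaging step is simple to sketch. The Python illustration below assumes heights are already ordered along the waterline and uses an arbitrary 11-pixel window (the section length is a free choice); since the samples share a common mean, averaging shrinks the per-pixel error roughly as one over the square root of the window size.

```python
# Minimal sketch of the waterline quasi-contour averaging step.
import numpy as np

def smooth_waterline_heights(heights, window=11):
    """Replace each waterline pixel height with the mean over a centred
    window of adjacent waterline pixels (heights ordered along the line)."""
    half = window // 2
    padded = np.pad(heights, half, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(padded, kernel, mode="valid")

# DEM heights sampled along one waterline: slowly varying water surface + noise
rng = np.random.default_rng(42)
true_level = 32.0                                   # quasi-contour level (m)
raw = true_level + rng.normal(scale=2.0, size=500)  # sigma ~ 2 m, as for the IDEM
corrected = smooth_waterline_heights(raw)
print(f"raw std: {raw.std():.2f} m, corrected std: {corrected.std():.2f} m")
```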
Abstract:
Mathematical relationships between Scoring Parameters can be used in Economic Scoring Formulas (ESFs) in tendering to distribute the score among bidders in the economic part of a proposal. Each contracting authority must set an ESF when publishing tender specifications, and the strategy of each bidder will differ depending on the ESF selected and its weight in the overall proposal scoring. This paper introduces the various mathematical relationships and density distributions that describe and inter-relate not only the main Scoring Parameters but also the main Forecasting Parameters in any capped tender (one whose price is upper-limited). Forecasting Parameters, as variables that can be known in advance before the deadline of a tender is reached, together with Scoring Parameters constitute the basis of a future Bid Tender Forecasting Model.
Abstract:
Zygomatic arch fractures often occur as part of a zygoma fracture or a Le Fort III fracture of the maxilla. Isolated fractures of the zygomatic arch comprise around 10% of all zygoma fractures. The main etiologic factors are traffic accidents, falls, assaults, and sports accidents. Treatment may involve minimally invasive surgical procedures for slightly displaced fractures, or surgery with more extensive access for large displacements of bone segments. This article reports the case of a 41-year-old man who sustained an assault to the face with a steel sickle, resulting in an exposed, unstable fracture of the zygomatic arch. Under general anesthesia the fracture was reduced, and the bone segments were then fixed with 2.0-mm screws.
Abstract:
Objective: Underreporting of energy intake is prevalent in food surveys, but there is controversy about which dietary assessment method yields greater underreporting rates. Our objective was to compare the validity of self-reported energy intake obtained by three dietary assessment methods against total energy expenditure (TEE) obtained by doubly labeled water (DLW) among Brazilian women. Design: Cross-sectional study. Subjects/setting: Sixty-five females aged 18 to 57 years (28 normal-weight, 10 overweight, and 27 obese) were recruited from two universities to participate. Main outcome measures: TEE determined by DLW; energy intake estimated by three 24-hour recalls, a 3-day food record, and a food frequency questionnaire (FFQ). Statistical analyses performed: Regression and analysis of variance with repeated measures compared TEE and energy intake values, energy intake-to-TEE ratios, and energy intake minus TEE values between dietary assessment methods. Bland and Altman plots were provided for each method. A χ² test compared the proportion of underreporters between the methods. Results: Mean TEE was 2,622 kcal (standard deviation [SD] = 490 kcal), while mean energy intake was 2,078 kcal (SD = 430 kcal) for the diet recalls, 2,044 kcal (SD = 479 kcal) for the food record, and 1,984 kcal (SD = 832 kcal) for the FFQ (all energy intake values significantly differed from TEE; P < 0.0001). Bland and Altman plots indicated great dispersion, negative mean differences between measurements, and wide limits of agreement. Obese subjects underreported more than normal-weight subjects in the diet recalls and in the food records, but not in the FFQ. Years of education, income, and ethnicity were associated with reporting accuracy. Conclusions: The FFQ produced the greatest under- and overestimation of energy intake. Underreporting of energy intake is a serious and prevalent error in dietary self-reports provided by Brazilian women, as has been described in studies conducted in developed countries.
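For reference, the Bland and Altman statistics used above reduce to a bias (mean difference) and 95% limits of agreement. A short Python sketch on made-up numbers shaped like the reported means and SDs:

```python
# Sketch of Bland-Altman agreement statistics for reported energy intake (EI)
# versus DLW-measured TEE; the simulated data below are illustrative only.
import numpy as np

def bland_altman(ei, tee):
    """Return bias and 95% limits of agreement for EI vs. TEE (kcal)."""
    diff = ei - tee
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(3)
tee = rng.normal(2622, 490, size=65)            # mean/SD reported for TEE
ei = tee - 544 + rng.normal(0, 400, size=65)    # underreporting of ~544 kcal
bias, (lo, hi) = bland_altman(ei, tee)
print(f"bias: {bias:.0f} kcal, limits of agreement: ({lo:.0f}, {hi:.0f})")
```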
Abstract:
Dietary soy lecithin supplementation decreases hyperlipidemia and influences lipid metabolism. Although this product is used by diabetic patients, there are no data about the effect of soy lecithin supplementation on the immune system. The addition of phosphatidylcholine, the main component of lecithin, to a culture of lymphocytes has been reported to alter their function. If phosphatidylcholine changes lymphocyte functions in vitro as previously shown, then it could also affect immune cells in vivo. In the present study, the effect of dietary soy lecithin on macrophage phagocytic capacity and on lymphocyte number in response to concanavalin A (ConA) stimulation was investigated in non-diabetic and alloxan-induced diabetic rats. Supplementation was carried out daily with 2 g kg(-1) b.w. lecithin during 7 days. After that, blood was drawn from fasting rats, and peritoneal macrophages and mesenteric lymph node lymphocytes were collected to determine the phospholipid content. Plasma triacylglycerol (TAG), total and HDL cholesterol, and glucose levels were also determined. Lymphocytes were stimulated by ConA. The MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) dye reduction method and flow cytometry were employed to evaluate lymphocyte metabolism and cell number, respectively. Soy lecithin supplementation significantly increased both macrophage phagocytic capacity (+29%) in non-diabetic rats and the lymphocyte number in diabetic rats (+92%). It is unlikely that plasma lipid levels indirectly affect immune cells, since plasma cholesterol, TAG, and phospholipid contents were not modified by lecithin supplementation. In conclusion, lymphocyte and macrophage function were altered by lecithin supplementation, indicating an immunomodulatory effect of phosphatidylcholine.
Abstract:
The paper studies a class of systems of linear retarded differential-difference equations with several parameters. It presents sufficient conditions under which no stability change occurs for an equilibrium point. An application of these results is given.
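A standard way to write such systems, and the object governing stability changes, is sketched below; the paper's exact parameterisation may differ.

```latex
% General linear retarded differential-difference system with delays
% $\tau_j \ge 0$, and its characteristic equation.
\begin{align*}
  \dot{x}(t) &= A\,x(t) + \sum_{j=1}^{m} B_j\, x(t-\tau_j),\\
  \Delta(\lambda) &= \det\!\Big(\lambda I - A - \sum_{j=1}^{m} B_j\, e^{-\lambda \tau_j}\Big) = 0.
\end{align*}
```

No stability change can occur for the equilibrium while no root of the characteristic function crosses the imaginary axis as the parameters (the matrices and delays) vary, which is the standard mechanism behind sufficient conditions of the kind described.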