981 results for sequential data
Abstract:
Earthworks tasks aim to level the ground surface at a target construction area and precede any kind of structural construction (e.g., road and railway construction). Earthworks comprise sequential tasks, such as excavation, transportation, spreading and compaction, and rely heavily on mechanical equipment and repetitive processes. In this context, it is essential to optimize the usage of all available resources under two key criteria: the cost and duration of earthwork projects. In this paper, we present an integrated system that uses two artificial intelligence techniques: data mining and evolutionary multi-objective optimization. The former is used to build data-driven models capable of providing realistic estimates of resource productivity, while the latter is used to optimize resource allocation considering the two main earthwork objectives (duration and cost). Experiments using real-world data from a construction site show that the proposed system is competitive with current manual earthwork design.
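The two-objective trade-off described above (duration vs. cost) can be illustrated with a minimal Pareto-front filter over candidate resource allocations. This is a generic sketch, not the paper's evolutionary optimizer, and the candidate scores below are hypothetical:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (duration, cost) pairs.

    A solution dominates another if it is no worse in both objectives
    and strictly better in at least one; for distinct pairs, the test
    "<= in both objectives and not equal" captures exactly that.
    """
    return [
        s for s in solutions
        if not any(o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions)
    ]

# Hypothetical (duration_days, cost) scores for candidate resource allocations.
candidates = [(10, 500), (12, 450), (9, 700), (11, 480), (12, 460)]
front = pareto_front(candidates)  # (12, 460) is dominated by (12, 450)
```

A multi-objective optimizer such as the one described in the abstract would return this non-dominated set to the planner rather than a single "best" plan.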
Abstract:
We report on a series of experiments that test the effects of an uncertain supply on the formation of bids and prices in sequential first-price auctions with independent private values and unit demands. Supply is uncertain when buyers do not know the exact number of units to be sold (i.e., the length of the sequence). Although we observe non-monotone behavior when supply is certain, as well as substantial overbidding, the data qualitatively support our price-trend predictions and the risk-neutral Nash equilibrium model of bidding for the last stage of a sequence, whether supply is certain or not. Our study shows that behavior in these markets changes significantly in the presence of an uncertain supply, and that this change can be explained by assuming that bidders form pessimistic beliefs about the occurrence of another stage.
Abstract:
BACKGROUND: Among patients with steroid-refractory ulcerative colitis (UC) in whom a first rescue therapy has failed, a second-line salvage treatment can be considered to avoid colectomy. AIM: To evaluate the efficacy and safety of second- or third-line rescue therapy over a one-year period. METHODS: Response to single or sequential rescue treatments with infliximab (5 mg/kg intravenously (iv) at weeks 0, 2 and 6, then every 8 weeks), ciclosporin (2 mg/kg/day iv, then 5 mg/kg/day orally) or tacrolimus (0.05 mg/kg divided into 2 doses) in steroid-refractory moderate-to-severe UC patients from 7 Swiss and 1 Serbian tertiary IBD centers was retrospectively studied. The primary endpoint was the one-year colectomy rate. RESULTS: 60% of patients responded to the first rescue therapy, 10% went to colectomy and the 30% who did not respond were switched to a 2nd-line rescue treatment. 66% of patients responded to the 2nd-line treatment whereas 34% failed, of whom 15% went to colectomy and 19% received a 3rd-line rescue treatment. Among those, 50% went to colectomy. The overall colectomy rate of the whole cohort was 18%. The steroid-free remission rate was 39%. Adverse event rates were 33%, 37.5% and 30% for the first-, second- and third-line treatments, respectively. CONCLUSION: Our data show that medical intervention, even with 2nd and 3rd rescue treatments, decreased colectomy frequency within one year of follow-up. A longer follow-up will be necessary to investigate whether sequential therapy only postpones colectomy and what percentage of patients will remain in long-term remission.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields.
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus-inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution.
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
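The gradual-deformation proposal described above blends the current model with an independent realization in a way that preserves the prior statistics for any perturbation strength. A minimal sketch, assuming standard-normal model parameters (the thesis works with geostatistically correlated fields, which this toy version does not reproduce):

```python
import math
import random

def gradual_deformation(m_current, step=0.3):
    """Propose a perturbation of a standard-normal parameter vector.

    Blending the current realization with an independent draw,
    m_new = m*cos(t) + z*sin(t), preserves the N(0, 1) prior for any
    step t; a small t yields a small, tunable perturbation, which is
    what makes the proposal attractive inside an MCMC sampler.
    """
    z = [random.gauss(0.0, 1.0) for _ in m_current]
    c, s = math.cos(step), math.sin(step)
    return [c * m + s * zi for m, zi in zip(m_current, z)]

# A step of 0 reproduces the current model; pi/2 draws a fresh prior sample.
proposal = gradual_deformation([0.4, -1.2, 0.9], step=0.3)
```

The perturbation strength `step` plays the role described in the abstract: it can be tuned so that proposals are accepted often enough while still moving the chain.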
Abstract:
AIMS: To investigate empirically the hypothesized relationship between counsellor motivational interviewing (MI) skills and patient change talk (CT) by analysing the articulation between counsellor behaviours and patient language during brief motivational interventions (BMI) addressing at-risk alcohol consumption. DESIGN: Sequential analysis of psycholinguistic codes obtained by two independent raters using the Motivational Interviewing Skill Code (MISC), version 2.0. SETTING: Secondary analysis of data from a randomized controlled trial evaluating the effectiveness of BMI in an emergency department. PARTICIPANTS: A total of 97 patients tape-recorded when receiving BMI. MEASUREMENTS: MISC variables were categorized into three counsellor behaviours (MI-consistent, MI-inconsistent and 'other') and three kinds of patient language (CT, counter-CT (CCT) and utterances not linked with the alcohol topic). Observed transition frequencies, conditional probabilities and significance levels based on odds ratios were computed using sequential analysis software. FINDINGS: MI-consistent behaviours were the only counsellor behaviours that were significantly more likely to be followed by patient CT. Those behaviours were significantly more likely to be followed by patient change exploration (CT and CCT) while MI-inconsistent behaviours and 'other' counsellor behaviours were significantly more likely to be followed by utterances not linked with the alcohol topic and significantly less likely to be followed by CT. MI-consistent behaviours were more likely after change exploration, whereas 'other' counsellor behaviours were more likely only after utterances not linked with the alcohol topic. CONCLUSIONS: Findings lend support to the hypothesized relationship between MI-consistent behaviours and CT, highlight the importance of patient influence on counsellor behaviour and emphasize the usefulness of MI techniques and spirit during brief interventions targeting change enhancement.
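The sequential analysis described above, tabulating how often each kind of patient language follows each counsellor behaviour, reduces to lag-1 transition counts and conditional probabilities. A minimal sketch with a hypothetical coded session (the codes below are illustrative shorthand, not MISC labels):

```python
from collections import Counter

def transition_probabilities(codes):
    """Lag-1 sequential analysis: estimate P(next code | current code)
    from the observed transition frequencies in a coded sequence."""
    pairs = Counter(zip(codes, codes[1:]))
    totals = Counter(codes[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

# Hypothetical session: MI+ = MI-consistent, MI- = MI-inconsistent,
# OTH = other counsellor behaviour, CT/CCT = change / counter-change talk.
session = ["MI+", "CT", "MI+", "CCT", "MI-", "OTH", "MI+", "CT"]
probs = transition_probabilities(session)
```

Significance testing on such tables (the odds ratios mentioned in the abstract) would then compare each conditional probability against the unconditional base rate of the following code.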
Abstract:
This study examined the validity and reliability of a sequential "Run-Bike-Run" test (RBR) in age-group triathletes. Eight Olympic-distance (OD) specialists (age 30.0 ± 2.0 years, mass 75.6 ± 1.6 kg, run VO2max 63.8 ± 1.9 ml·kg(-1)·min(-1), cycle VO2peak 56.7 ± 5.1 ml·kg(-1)·min(-1)) performed four trials over 10 days. Trial 1 (TRVO2max) was an incremental treadmill running test. Trials 2 and 3 (RBR1 and RBR2) involved: 1) a 7-min run at 15 km·h(-1) (R1) plus a 1-min transition to 2) cycling to fatigue (2 W·kg(-1) body mass, then 30 W each 3 min); 3) 10 min of cycling at 3 W·kg(-1) (Bsubmax); another 1-min transition; and 4) a second 7-min run at 15 km·h(-1) (R2). Trial 4 (TT) was a 30-min cycle, 20-min run time trial. No significant differences in absolute oxygen uptake (VO2), heart rate (HR), or blood lactate concentration ([BLA]) were evidenced between RBR1 and RBR2. For all measured physiological variables, the limits of agreement were similar, and the mean differences were physiologically unimportant, between trials. Low levels of test-retest error (i.e. ICC > 0.8, CV < 10%) were observed for most (logged) measurements. However, [BLA] post R1 (ICC 0.87, CV 25.1%), [BLA] post Bsubmax (ICC 0.99, CV 16.31%) and [BLA] post R2 (ICC 0.51, CV 22.9%) were least reliable. These error ranges may help coaches detect real changes in training status over time. Moreover, RBR test variables can be used to predict discipline-specific and overall TT performance. Cycle VO2peak, cycle peak power output, and the change between R1 and R2 (deltaR1R2) in [BLA] were most highly related to overall TT distance (r = 0.89, p < 0.01; r = 0.94, p < 0.02; r = 0.86, p < 0.05, respectively). The percentage of TRVO2max at 15 km·h(-1), and deltaR1R2 HR, were also related to run TT distance (r = -0.83 and 0.86, both p < 0.05).
Abstract:
Classical treatments of problems of sequential mate choice assume that the distribution of the quality of potential mates is known a priori. This assumption, made for analytical purposes, may seem unrealistic, contradicting empirical data as well as evolutionary arguments. Using stochastic dynamic programming, we develop a model that allows searching individuals to learn about the distribution, in particular by updating its mean and variance during the search. In a constant environment, a priori knowledge of the parameter values brings strong benefits in both the time needed to make a decision and the average value of the mate obtained. Knowing the variance yields more benefits than knowing the mean, and benefits increase with variance. However, the costs of learning become progressively lower as more time is available for choice. When parameter values differ between demes and/or searching periods, a strategy relying on fixed a priori information might lead to erroneous decisions, which confers advantages on the learning strategy. However, time for choice plays an important role as well: if a decision must be made rapidly, a fixed strategy may do better even when the fixed image does not coincide with the local parameter values. These results help delineate the ecological and behavioral context in which learning strategies may spread.
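Under the classical assumption the model relaxes, a fully known quality distribution, the optimal sequential-choice rule can be computed by backward induction: accept a candidate whose value exceeds the expected value of continuing the search. A minimal sketch for a discrete distribution; the names and numbers are illustrative, not the paper's model:

```python
def acceptance_thresholds(values, probs, horizon):
    """Backward induction for sequential choice with a known discrete
    quality distribution. At each stage the searcher accepts a candidate
    whose value exceeds the expected value of continuing; the final
    candidate is always accepted."""
    thresholds = [0.0]                        # last stage: accept anything
    cont = sum(v * p for v, p in zip(values, probs))
    for _ in range(horizon - 1):
        thresholds.append(cont)
        cont = sum(max(v, cont) * p for v, p in zip(values, probs))
    return list(reversed(thresholds))

# Qualities 1..3 equally likely, at most three candidates to inspect.
rule = acceptance_thresholds([1, 2, 3], [1 / 3, 1 / 3, 1 / 3], 3)
```

The thresholds decrease toward the end of the search, which matches the intuition in the abstract that time pressure makes searchers less choosy.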
Abstract:
In this work we study older workers' (50-64) labor force transitions after a health/disability shock. We find that the probability of continuing to work decreases with both age and the severity of the shock. Moreover, we find strong interactions between age and severity in the 50-64 age range and none in the 30-49 age range. Regarding demographics, we find that being female and being married reduce the probability of continuing to work. On the contrary, being the main breadwinner, education and skill levels increase it. Interestingly, the effect of some demographics changes sign when we look at transitions from inactivity to work. This is the case for being married or having a working spouse. Undoubtedly, leisure complementarities play a role in the latter case. Since the data we use contain very detailed information on disabilities, we are able to evaluate the marginal effect of each type of disability on both the probability of continuing to work and the probability of returning to work. Some of these results may have strong policy implications.
Abstract:
Biogeographic studies dealing with Bombyliidae are rare in the literature, and no information is available on the family's origin and early diversification. In this study, we found evidence from molecular phylogeny and from the fossil record supporting a Middle Jurassic origin of the Bombylioidea, taken as a starting point to discuss the biogeography and diversification of Crocidiinae. Based on a previously published phylogenetic hypothesis, we performed a Brooks Parsimony Analysis (BPA) to discuss the biogeographical history of Crocidiinae lineages. This subfamily is mostly distributed over arid areas of the early components of Gondwanaland, Chile and southern Africa, but also in the southwestern Palaearctic and southwestern Nearctic. The vicariant events affecting Crocidiinae biogeography at the generic level seem to be related to the sequential separation of a Laurasian clade from a Gondwanan clade, followed by the splitting of the latter into smaller components. This also leads to a hypothesis of an origin of the Crocidiinae in the Middle Jurassic, the same period in which other bombyliid lineages are supposed to have arisen and radiated.
Abstract:
BACKGROUND: Both induction chemotherapy followed by irradiation and concurrent chemotherapy and radiotherapy have been reported as valuable alternatives to total laryngectomy in patients with advanced larynx or hypopharynx cancer. We report results of the randomized phase 3 trial 24954 from the European Organization for Research and Treatment of Cancer. METHODS: Patients with resectable advanced squamous cell carcinoma of the larynx (tumor stage T3-T4) or hypopharynx (T2-T4), with regional lymph nodes in the neck staged as N0-N2 and with no metastasis, were randomly assigned to treatment in the sequential (or control) or the alternating (or experimental) arm. In the sequential arm, patients with a 50% or more reduction in primary tumor size after two cycles of cisplatin and 5-fluorouracil received another two cycles, followed by radiotherapy (70 Gy total). In the alternating arm, a total of four cycles of cisplatin and 5-fluorouracil (in weeks 1, 4, 7, and 10) were alternated with radiotherapy with 20 Gy during the three 2-week intervals between chemotherapy cycles (60 Gy total). All nonresponders underwent salvage surgery and postoperative radiotherapy. The Kaplan-Meier method was used to obtain time-to-event data. RESULTS: The 450 patients were randomly assigned to treatment (224 to the sequential arm and 226 to the alternating arm). Median follow-up was 6.5 years. Survival with a functional larynx was similar in sequential and alternating arms (hazard ratio of death and/or event = 0.85, 95% confidence interval = 0.68 to 1.06), as were median overall survival (4.4 and 5.1 years, respectively) and median progression-free interval (3.0 and 3.1 years, respectively). Grade 3 or 4 mucositis occurred in 64 (32%) of the 200 patients in the sequential arm who received radiotherapy and in 47 (21%) of the 220 patients in the alternating arm. Late severe edema and/or fibrosis was observed in 32 (16%) patients in the sequential arm and in 25 (11%) in the alternating arm. 
CONCLUSIONS: Larynx preservation, progression-free interval, and overall survival were similar in both arms, as were acute and late toxic effects.
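The time-to-event results above rely on the Kaplan-Meier method. A minimal sketch of the estimator with right-censoring, on hypothetical follow-up data (this toy version processes tied times one patient at a time, which yields the same survival product):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- follow-up time for each patient
    events -- 1 if the event occurred at that time, 0 if censored
    Returns (time, S(t)) at each event time: at every event the
    survival probability is multiplied by (at_risk - 1) / at_risk,
    while censored patients only shrink the risk set.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:
            surv *= (at_risk - 1) / at_risk
            curve.append((times[i], surv))
        at_risk -= 1
    return curve

# Hypothetical follow-up (years) for four patients, one censored at year 2.
curve = kaplan_meier([1, 2, 3, 4], [1, 0, 1, 1])
```

Comparing two such curves between trial arms is what the hazard ratio and confidence interval quoted in the abstract summarize.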
Abstract:
Purpose: Letrozole (LET) has recently been shown to be superior to tamoxifen for postmenopausal patients (pts). In addition, LET radiosensitizes breast cancer cells in vitro. We conducted a phase II randomized study to evaluate concurrent and sequential radiotherapy (RT)-LET in the adjuvant setting. We present here clinical results with a minimum follow-up of 24 months. Patients and Methods: Postmenopausal pts with early-stage breast cancer were randomized after conservative surgery to either: A) concurrent RT-LET (LET started 3 weeks before the first day of RT) or B) sequential RT-LET (LET started 3 weeks after the end of RT). Whole breast RT was delivered to a total dose of 50 Gy. A 10-16 Gy boost was allowed according to age and pathological prognostic factors. Pts were stratified by center, adjuvant chemotherapy, boost, and radiation-induced CD8 apoptosis (RILA). RILA was performed before RT as previously published (Ozsahin et al. Clin Cancer Res, 2005). An independent monitoring committee reviewed individual safety data. Skin toxicities were evaluated by two different clinicians at each medical visit (CTCAE v3.0). Lung CT-scan and functional pulmonary tests were performed regularly. DNA samples were screened for SNPs in candidate genes as recently published (Azria et al., Clin Cancer Res, 2008). Results: A total of 150 pts were randomized between 01/05 and 02/07. Median follow-up is 26 months (range, 3-40 months). No statistical differences were identified between the two arms in terms of mean age; initial TNM; median surgical bed volume; post surgical breast volume. Chemotherapy and RT boost were delivered in 19% and 38% of pts, respectively. Nodes received 50 Gy in 23% of patients without differences between both arms. During RT and within the first 6 weeks after RT, 10 patients (6.7%) presented grade 3 acute skin dermatitis during RT but no differences were observed between both arms (4 and 6 patients in arm A and B, respectively). 
At 26 months of follow-up, grade 2 or higher radiation-induced subcutaneous fibrosis (RISCF) was present in 4 patients (3%), without any difference between arm A (n = 2) and arm B (n = 2), p = 0.93. In both arms, all patients who presented a RISCF had a RILA lower than 16%. Sensitivity and specificity were 100% and 39%, respectively. No acute lung toxicities were observed, and quality of life was good to excellent for all patients. SNP analyses are still ongoing (Pr Rosenstein, NY). Conclusion: Acute and early late grade 2 dermatitis were similar in both arms. The only factor that influenced RISCF was a low radiation-induced lymphocyte apoptosis yield. We prospectively confirmed the capacity of RILA to identify patients hypersensitive to radiation. Indeed, patients with RILA above 16% did not present late effects of radiation, confirming the first prospective trial we published in 2005 (Ozsahin et al., Clin Cancer Res).
Abstract:
For the general practitioner to be able to prescribe optimal therapy to his individual hypertensive patients, he needs accurate information on the therapeutic agents he is going to administer and practical treatment strategies. The information on drugs and drug combinations has to be applicable to the treatment of individual patients and not just patient study groups. A basic requirement is knowledge of the dose-response relationship for each compound in order to choose the optimal therapeutic dose. Contrary to general assumption, this key information is difficult to obtain and often not available to the physician for many years after marketing of a drug. As a consequence, excessive doses are often used. Furthermore, the physician needs comparative data on the various antihypertensive drugs that are applicable to the treatment of individual patients. In order to minimize potential side effects due to unnecessary combinations of compounds, the strategy of sequential monotherapy is proposed, with the goal of treating as many patients as possible with monotherapy at optimal doses. More drug trials of a crossover design and more individualized analyses of the results are badly needed to provide the physician with information that he can use in his daily practice. In this time of continuous intensive development of new antihypertensive agents, much could be gained in enhanced efficacy and reduced incidence of side effects by taking a closer look at the drugs already available and using them more appropriately in individual patients.
Abstract:
Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are wireless warning lights that flash in sequence to clearly delineate the taper at work zones. The effectiveness of sequential lights was investigated using controlled field studies. Traffic parameters were collected at the same field site with and without the deployment of sequential lights. Three surrogate performance measures were used to determine the impact of sequential lights on safety: the speeds of approaching vehicles, the number of late taper merges, and the locations where vehicles merged into the open lane from the closed lane. In addition, an economic analysis was conducted to monetize the benefits and costs of deploying sequential lights at nighttime work zones. The results of this study indicate that sequential warning lights had a net positive effect in reducing the speeds of approaching vehicles, enhancing driver compliance, and reducing late taper merges by passenger cars, trucks and vehicles at rural work zones. Statistically significant decreases of 2.21 mph in mean speed and 1 mph in 85th-percentile speed resulted with sequential lights. The shift of the cumulative speed distributions to the left (i.e. a speed decrease) was also found to be statistically significant using the Mann-Whitney and Kolmogorov-Smirnov tests. However, a statistically significant increase of 0.91 mph in the speed standard deviation also resulted with sequential lights. With sequential lights, the percentage of vehicles that merged earlier increased from 53.49% to 65.36%. A benefit-cost ratio of around 5 or 10 resulted from this analysis of Missouri nighttime work zones and historical crash data; the two different benefit-cost ratios reflect two different ways of computing labor costs.
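The Kolmogorov-Smirnov comparison of cumulative speed distributions used above reduces to the largest vertical gap between two empirical CDFs. A minimal sketch of the two-sample D statistic on hypothetical speed samples (the p-value, which requires the KS distribution, is omitted):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov D: the largest vertical gap
    between the empirical cumulative distributions of two samples,
    e.g. approach speeds with and without sequential lights."""
    d = 0.0
    for v in sorted(set(a) | set(b)):
        fa = sum(x <= v for x in a) / len(a)
        fb = sum(x <= v for x in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

# Hypothetical speed samples (mph): treated speeds shifted slightly lower.
d = ks_statistic([52, 55, 57, 60], [50, 53, 55, 58])
```

A leftward shift of the treated distribution, as reported in the study, shows up as the treated CDF sitting above the untreated one.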
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
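The non-parametric kernel link between the electrical and hydraulic conductivities can be illustrated with Nadaraya-Watson kernel regression, a simple univariate special case of kernel-density-based conditioning. This is a sketch with hypothetical collocated log-conductivity data, not the paper's full multivariate procedure:

```python
import math

def nw_estimate(x_query, xs, ys, bandwidth=0.5):
    """Nadaraya-Watson kernel regression: a non-parametric estimate of
    E[y | x] with a Gaussian kernel, standing in here for predicting
    log hydraulic conductivity from collocated log electrical
    conductivity measurements."""
    w = [math.exp(-0.5 * ((x_query - x) / bandwidth) ** 2) for x in xs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

# Hypothetical collocated (log electrical, log hydraulic) conductivity pairs.
log_ec = [-2.0, -1.5, -1.0, -0.5]
log_k = [-5.2, -4.8, -4.1, -3.9]
estimate = nw_estimate(-1.2, log_ec, log_k)
```

Because no parametric form is assumed, the regression adapts to whatever in situ petrophysical relationship the collocated data exhibit, which is the motivation for the kernel density approach in the abstract.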