907 results for General Linear Methods


Relevance: 30.00%

Publisher:

Abstract:

Background: With increasing computer power, simulating the dynamics of complex systems in chemistry and biology is becoming routine. The modelling of individual reactions in (bio)chemical systems involves a large number of random events that can be simulated by the stochastic simulation algorithm (SSA). The key quantity is the step size, or waiting time, τ, whose value inversely depends on the size of the propensities of the different channel reactions and which needs to be re-evaluated after every firing event. Such a discrete-event simulation may be extremely expensive, in particular for stiff systems where τ can be very short due to the fast kinetics of some of the channel reactions. Several alternative methods have been put forward to increase the integration step size. The so-called τ-leap approach takes a larger step size by allowing all the reactions to fire within that step, with firing numbers drawn from a Poisson or binomial distribution. Although the expected value for the different species in the reactive system is maintained with respect to more precise methods, the variance at steady state can suffer from large errors as τ grows. Results: In this paper we extend Poisson τ-leap methods to a general class of Runge-Kutta (RK) τ-leap methods. We show that with the proper selection of the coefficients, the variance of the extended τ-leap can be well behaved, leading to significantly larger step sizes. Conclusions: The benefit of adapting the extended method to the use of RK frameworks is clear in terms of speed of calculation, as the number of evaluations of the Poisson distribution is still one set per time step, as in the original τ-leap method. The approach paves the way to explore new multiscale methods to simulate (bio)chemical systems.
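To make the τ-leap step concrete, here is a minimal Python sketch of a single Poisson τ-leap update; the two-reaction network, rate constants, and the clamping of negative counts are illustrative assumptions, not the paper's extended Runge-Kutta scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-reaction network (assumption): A -> B with rate c1*A,
# and B -> A with rate c2*B. Stoichiometry rows: change in (A, B) per firing.
stoich = np.array([[-1, 1],
                   [1, -1]])
c = np.array([1.0, 0.5])

def propensities(x):
    """Propensity of each reaction channel in state x = (A, B)."""
    return c * x

def tau_leap_step(x, tau):
    """Advance the state by one Poisson tau-leap of length tau:
    every channel fires a Poisson(a_j * tau) number of times."""
    a = propensities(x)
    k = rng.poisson(a * tau)              # firings per channel in [t, t+tau)
    return np.maximum(x + k @ stoich, 0)  # clamp to avoid negative counts

x = np.array([100, 0])
for _ in range(50):
    x = tau_leap_step(x, tau=0.05)
print(x)
```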

Relevance: 30.00%

Publisher:

Abstract:

Reliable estimates of heavy-truck volumes are important in a number of transportation applications. Estimates of truck volumes are necessary for pavement design and pavement management. Truck volumes are important in traffic safety. The number of trucks on the road also influences roadway capacity and traffic operations. Additionally, heavy vehicles pollute at higher rates than passenger vehicles. Consequently, reliable estimates of heavy-truck vehicle miles traveled (VMT) are important in creating accurate inventories of on-road emissions. This research evaluated three different methods to calculate heavy-truck annual average daily traffic (AADT), which can subsequently be used to estimate VMT. Traffic data from continuous count stations provided by the Iowa DOT were used to estimate AADT for two different truck groups (single-unit and multi-unit) using the three methods. The first method developed monthly and daily expansion factors for each truck group. The second and third methods created general expansion factors for all vehicles. The accuracy of the three methods was compared using n-fold cross-validation: the data are split into n partitions, and each partition is used in turn to validate estimates derived from the remaining data. The prediction error was determined by averaging the squared error between the estimated and the actual AADT. Overall, the prediction error was lowest, for both single-unit and multi-unit trucks, for the method that developed expansion factors separately for each truck group. This indicates that using expansion factors specific to heavy trucks results in better estimates of AADT, and subsequently VMT, than using aggregate expansion factors and applying a percentage of trucks. Monthly, daily, and weekly traffic patterns were also evaluated. Significant variation exists in the temporal and seasonal patterns of heavy trucks as compared to passenger vehicles, which suggests that aggregate expansion factors fail to adequately describe truck travel patterns.
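The expansion-factor idea reduces to scaling a short-term count by month and day-of-week ratios derived from continuous counts. A sketch on synthetic data follows; the factor definitions shown are one common convention, and all volumes are invented rather than Iowa DOT data.

```python
import numpy as np
import pandas as pd

# Hypothetical continuous-count data: one daily truck volume per day for a
# year. In practice this would come from continuous count stations.
df = pd.DataFrame({
    "date": pd.date_range("2023-01-01", periods=365, freq="D"),
    "volume": np.random.default_rng(1).poisson(800, 365),
})
df["month"] = df["date"].dt.month
df["dow"] = df["date"].dt.dayofweek

aadt = df["volume"].mean()

# Expansion factors: ratio of AADT to the average volume observed in each
# month and on each day of week (conventions vary by agency).
monthly_factor = aadt / df.groupby("month")["volume"].mean()
daily_factor = aadt / df.groupby("dow")["volume"].mean()

# Expand a single short-term count taken in a known month / on a known
# weekday into an AADT estimate by applying both factors.
count, month, dow = 760, 3, 1
aadt_estimate = count * monthly_factor[month] * daily_factor[dow]
print(round(aadt_estimate))
```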

Relevance: 30.00%

Publisher:

Abstract:

Error-correcting codes and matroids have been widely used in the study of ordinary secret sharing schemes. In this paper, the connections between codes, matroids, and a special class of secret sharing schemes, namely, multiplicative linear secret sharing schemes (LSSSs), are studied. Such schemes are known to enable multiparty computation protocols secure against general (nonthreshold) adversaries. Two open problems related to the complexity of multiplicative LSSSs are considered in this paper. The first one deals with strongly multiplicative LSSSs. As opposed to the case of multiplicative LSSSs, it is not known whether there is an efficient method to transform an LSSS into a strongly multiplicative LSSS for the same access structure with a polynomial increase of the complexity. A property of strongly multiplicative LSSSs that could be useful in solving this problem is proved. Namely, using a suitable generalization of the well-known Berlekamp–Welch decoder, it is shown that all strongly multiplicative LSSSs enable efficient reconstruction of a shared secret in the presence of malicious faults. The second one is to characterize the access structures of ideal multiplicative LSSSs. Specifically, the considered open problem is to determine whether all self-dual vector space access structures are in this situation. By the aforementioned connection, this in fact constitutes an open problem about matroid theory, since it can be restated in terms of representability of identically self-dual matroids by self-dual codes. A new concept, the flat-partition, is introduced, which provides a useful classification of identically self-dual matroids. Uniform identically self-dual matroids, which are known to be representable by self-dual codes, form one of the classes. It is proved that this property also holds for the family of matroids that, in a natural way, is the next class in the above classification: the identically self-dual bipartite matroids.
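Shamir's scheme is the textbook example of a multiplicative LSSS: parties can locally multiply their shares of two secrets to obtain shares of the product on a degree-2t polynomial. A minimal sketch over a prime field, illustrating only the multiplicative property, not the paper's constructions or the Berlekamp–Welch-based reconstruction:

```python
import random

P = 2**31 - 1  # a Mersenne prime; all arithmetic is modulo P

def share(secret, t, n):
    """Shamir (t+1)-out-of-n sharing: random degree-t polynomial f with
    f(0) = secret; party i receives f(i)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at 0 from (i, f(i)) pairs."""
    secret = 0
    for i, yi in points:
        num = den = 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

t, n = 1, 3  # product polynomial has degree 2t, so we need 2t+1 <= n parties
a, b = 11, 17
sa, sb = share(a, t, n), share(b, t, n)
# Multiplicativity: pointwise products of shares are points on a degree-2t
# polynomial whose value at 0 is a*b.
prod_shares = [(i + 1, (x * y) % P) for i, (x, y) in enumerate(zip(sa, sb))]
assert reconstruct(prod_shares) == (a * b) % P
```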

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND AND STUDY AIMS: Appropriate use of colonoscopy is a key component of quality management in gastrointestinal endoscopy. In an update of a 1998 publication, the 2008 European Panel on the Appropriateness of Gastrointestinal Endoscopy (EPAGE II) defined appropriateness criteria for various colonoscopy indications. This introductory paper deals with methodology, general appropriateness, and a review of colonoscopy complications. METHODS: The RAND/UCLA Appropriateness Method was used to evaluate the appropriateness of various diagnostic colonoscopy indications, with 14 multidisciplinary experts rating each on a scale from 1 (extremely inappropriate) to 9 (extremely appropriate). Evidence reported in a comprehensive updated literature review was used for these decisions. Consolidation of the ratings into three appropriateness categories (appropriate, uncertain, inappropriate) was based on the median and the heterogeneity of the votes. The experts then met to discuss areas of disagreement in the light of existing evidence, followed by a second rating round and a subsequent third voting round on necessity criteria, using much more stringent standards (i.e., colonoscopy is deemed mandatory). RESULTS: Overall, 463 indications were rated, with 55%, 16%, and 29% of them judged appropriate, uncertain, and inappropriate, respectively. Perforation and hemorrhage rates, as reported in 39 studies, were in general <0.1% and <0.3%, respectively. CONCLUSIONS: The updated EPAGE II criteria constitute an aid to clinical decision-making but should in no way replace individual judgment. Detailed panel results are freely available on the internet (www.epage.ch) and will thus constitute a reference source of information for clinicians.
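The consolidation step maps the panel's 1-9 ratings to the three categories from the median and the spread of the votes. The sketch below implements one common variant of the RAND/UCLA rule (median tertile plus a disagreement test); the exact thresholds are an assumption, since EPAGE II's precise definition is not given in the abstract.

```python
import statistics

def consolidate(votes):
    """Collapse panellists' 1-9 appropriateness ratings into one of three
    categories. One common RAND/UCLA variant (assumed here): the median's
    tertile decides the category, and disagreement -- at least a third of
    the panel in each extreme tertile -- forces 'uncertain'."""
    med = statistics.median(votes)
    low = sum(v <= 3 for v in votes)
    high = sum(v >= 7 for v in votes)
    if low >= len(votes) / 3 and high >= len(votes) / 3:
        return "uncertain"  # panel disagreement
    if med >= 7:
        return "appropriate"
    if med <= 3:
        return "inappropriate"
    return "uncertain"

# A hypothetical 14-expert panel, as in the EPAGE II process.
print(consolidate([8, 8, 7, 9, 7, 8, 6, 7, 8, 7, 8, 9, 7, 8]))  # appropriate
```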

Relevance: 30.00%

Publisher:

Abstract:

We present a polyhedral framework for establishing general structural properties of optimal solutions to stochastic scheduling problems in which multiple job classes vie for service resources: the existence of an optimal priority policy in a given family, characterized by a greedoid (whose feasible class subsets may receive higher priority), where optimal priorities are determined by class-ranking indices, under restricted linear performance objectives (partial indexability). This framework extends that of Bertsimas and Niño-Mora (1996), which explained the optimality of priority-index policies under all linear objectives (general indexability). We show that, if performance measures satisfy partial conservation laws (with respect to the greedoid), which extend previous generalized conservation laws, then the problem admits a strong LP relaxation over a so-called extended greedoid polytope, which has strong structural and algorithmic properties. We present an adaptive-greedy algorithm (which extends Klimov's) that takes as input the linear objective coefficients and (1) determines whether the optimal LP solution is achievable by a policy in the given family, and (2) if so, computes a set of class-ranking indices that characterize optimal priority policies in the family. In the special case of project scheduling, we show that, under additional conditions, the optimal indices can be computed separately for each project (index decomposition). We further apply the framework to the important restless bandit model (two-action Markov decision chains), obtaining new index policies that extend Whittle's (1988), and simple sufficient conditions for their validity. These results highlight the power of polyhedral methods (the so-called achievable region approach) in dynamic and stochastic optimization.
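The classical cμ rule is the simplest instance of the class-ranking index policies this framework generalizes: each class receives a fixed index and the server always works on the highest-index nonempty class. A minimal sketch with invented holding costs and service rates (not the paper's adaptive-greedy algorithm):

```python
# Illustrative multiclass queue data (assumptions): holding cost c_k per
# job per unit time and service rate mu_k for each class.
costs = {"A": 4.0, "B": 1.0, "C": 2.5}
rates = {"A": 0.6, "B": 2.0, "C": 1.2}

# c*mu index: the optimal priority order under linear holding costs.
index = {k: costs[k] * rates[k] for k in costs}
priority = sorted(index, key=index.get, reverse=True)
print(priority)  # ['C', 'A', 'B']

def next_to_serve(queue_lengths):
    """Pick the class to serve: highest c*mu index among nonempty queues."""
    candidates = [k for k in priority if queue_lengths.get(k, 0) > 0]
    return candidates[0] if candidates else None

print(next_to_serve({"A": 0, "B": 3, "C": 1}))  # -> 'C'
```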

Relevance: 30.00%

Publisher:

Abstract:

The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, biological motion is often defined by input to more than one sensory modality. For this reason, in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on the perception of visually defined biological motion. In contrast to previous studies of audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and evidence from neurophysiological and neuroimaging studies, we discuss the neural mechanisms likely to underlie this effect.
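Detectability changes of the kind reported here are conventionally quantified with signal detection theory's d′ = z(hit rate) − z(false-alarm rate). A small sketch with invented hit and false-alarm rates for congruent versus incongruent auditory-motion conditions (not the paper's data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Detectability index: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Illustrative rates (assumptions): sound moving with vs against the target.
print(round(d_prime(0.82, 0.20), 2))  # congruent auditory motion
print(round(d_prime(0.65, 0.20), 2))  # incongruent auditory motion
```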

Relevance: 30.00%

Publisher:

Abstract:

The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ("hard" data) are combined with secondary information on local uncertainties (treated as "soft" data) to generate science-based uncertainty assessments of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of posterior probability distributions generated across space, while no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data. Finally, a separate uncertainty analysis was conducted that evaluated the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available in certain populated sites. The analysis illustrates the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, both useful features for the Chernobyl fallout study.
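Under Gaussian assumptions, the hard/soft distinction can be imitated by assigning soft observations a larger noise variance in a kriging-type Gaussian-process predictor. The sketch below illustrates only that simplified idea; BME itself avoids the Gaussian assumption, and all locations and values here are invented:

```python
import numpy as np

def rbf(a, b, length=10.0):
    """Squared-exponential covariance between 1-D location sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Invented 1-D locations and 137Cs-like activity values.
x_hard = np.array([0.0, 15.0, 40.0]);  y_hard = np.array([5.0, 3.2, 1.1])
x_soft = np.array([25.0]);             y_soft = np.array([2.0])

x = np.concatenate([x_hard, x_soft])
y = np.concatenate([y_hard, y_soft])
# Hard data: tiny nugget. Soft data: a large noise variance standing in for
# the local uncertainty that BME would treat as a full "soft" distribution.
noise = np.diag([1e-6, 1e-6, 1e-6, 0.5])

x_new = np.array([20.0, 30.0])
K = rbf(x, x) + noise
k_star = rbf(x_new, x)
pred_mean = k_star @ np.linalg.solve(K, y)
pred_var = rbf(x_new, x_new).diagonal() - np.einsum(
    "ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
print(pred_mean, pred_var)
```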

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Urinary creatinine excretion is used as a marker of the completeness of timed urine collections, which are a keystone of several metabolic evaluations in clinical investigations and epidemiological surveys. METHODS: We used data from two independent Swiss cross-sectional population-based studies with standardised 24-hour urinary collection and measured anthropometric variables. Only data from adults of European descent with an estimated glomerular filtration rate (eGFR) ≥60 ml/min/1.73 m2 and reported completeness of the urinary collection were retained. A linear regression model was developed to predict centiles of the 24-hour urinary creatinine excretion in 1,137 participants from the Swiss Survey on Salt and validated in 994 participants from the Swiss Kidney Project on Genes in Hypertension. RESULTS: The mean urinary creatinine excretion was 193 ± 41 μmol/kg/24 hours in men and 151 ± 38 μmol/kg/24 hours in women in the Swiss Survey on Salt. The values were inversely correlated with age and body mass index (BMI). CONCLUSIONS: We propose a validated prediction equation for 24-hour urinary creatinine excretion in the general European population, based on readily available variables such as age, sex and BMI, together with a few derived nomograms to ease its clinical application. This should help healthcare providers interpret the completeness of a 24-hour urine collection in daily clinical practice and in epidemiological population studies.
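A prediction equation of this kind is, at its core, a linear regression of 24-hour creatinine excretion on age, sex, and BMI. The sketch below fits one on synthetic data; the variable ranges and resulting coefficients are invented and are not the published equation (which, additionally, predicts centiles rather than only the mean):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "age": rng.uniform(18, 80, n),
    "male": rng.integers(0, 2, n),
    "bmi": rng.uniform(18, 35, n),
})
# Synthetic outcome loosely shaped like the abstract's findings: higher
# excretion in men, decreasing with age and BMI (coefficients invented).
df["ucreat"] = (170 + 40 * df["male"] - 0.6 * df["age"]
                - 1.2 * df["bmi"] + rng.normal(0, 15, n))

model = smf.ols("ucreat ~ age + male + bmi", data=df).fit()
print(model.params)

# Predicted 24-h excretion (umol/kg/24 h) for a new subject.
new = pd.DataFrame({"age": [50], "male": [0], "bmi": [24]})
print(model.predict(new))
```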

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Uveal melanoma exhibits a high incidence of metastases, and, to date, there is no systemic therapy that clearly improves outcomes. The anti-cytotoxic T-lymphocyte-associated protein 4 (anti-CTLA-4) antibody ipilimumab is a standard of care for metastatic melanoma; however, the clinical activity of CTLA-4 inhibition in patients with metastatic uveal melanoma is poorly defined. METHODS: To assess ipilimumab in this setting, the authors performed a multicenter, retrospective analysis across 4 hospitals in the United States and Europe. Clinical characteristics, toxicities, and radiographic disease burden, as determined by central, blinded radiology review, were evaluated. RESULTS: Thirty-nine patients with uveal melanoma were identified, including 34 patients who received 3 mg/kg ipilimumab and 5 who received 10 mg/kg ipilimumab. Immune-related response criteria and modified World Health Organization criteria were used to assess the response rate (RR) and the combined response plus stable disease (SD) rate after 12 weeks, after 23 weeks, and overall (median follow-up, 50.4 weeks [12.6 months]). At week 12, the RR was 2.6%, and the response plus SD rate was 46%; at week 23, the RR was 2.6%, and the response plus SD rate was 28.2%. There was 1 complete response and 1 late partial response (at 100 weeks after initial SD) for an immune-related RR of 5.1%. Immune-related adverse events were observed in 28 patients (71.8%) and included 7 (17.9%) grade 3 and 4 events. Immune-related adverse events were more frequent in patients who received 10 mg/kg ipilimumab than in those who received 3 mg/kg ipilimumab. The median overall survival from the first dose of ipilimumab was 9.6 months (95% confidence interval, 6.3-13.4 months; range, 1.6-41.6 months). Performance status, lactate dehydrogenase level, and an absolute lymphocyte count ≥1000 cells/μL at week 7 were significantly associated with survival. CONCLUSIONS: In this multicenter, retrospective analysis of patients with metastatic uveal melanoma, durable responses to ipilimumab and manageable toxicity were observed.

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: Delirium is highly prevalent in general hospitals but remains underrecognized and undertreated despite its association with increased morbidity, mortality, and health services utilization. To enhance its management, we developed guidelines covering all aspects of care, from risk factor identification to preventive, diagnostic, and therapeutic interventions in adult patients. METHODS: Guidelines, systematic reviews, randomized controlled trials (RCTs), and cohort studies were systematically searched and evaluated. Based on a synthesis of the retrieved high-quality documents, recommendation items were submitted to a multidisciplinary expert panel. Experts scored the appropriateness of recommendation items using an evidence-based, explicit, multidisciplinary panel approach, and each recommendation was graded according to the results of this process. RESULTS: Rated recommendations were mostly supported by a low level of evidence (1.3% RCTs and systematic reviews and 14.3% nonrandomized trials, vs. 84.4% observational studies or expert opinions). Nevertheless, 71.1% of recommendations were considered appropriate by the experts. Prevention of delirium and its nonpharmacological management should be fostered. Haloperidol remains the first-choice drug, whereas the role of atypical antipsychotics is still uncertain. CONCLUSIONS: While many topics addressed in these guidelines have not yet been adequately studied, an explicit panel- and evidence-based approach allowed the proposal of comprehensive recommendations for the prevention and management of delirium in general hospitals.

Relevance: 30.00%

Publisher:

Abstract:

The achievable region approach seeks solutions to stochastic optimisation problems by: (i) characterising the space of all possible performances (the achievable region) of the system of interest, and (ii) optimising the overall system-wide performance objective over this space. This is radically different from conventional formulations based on dynamic programming. The approach is explained with reference to a simple two-class queueing system. Powerful new methodologies due to the authors and co-workers are deployed to analyse a general multiclass queueing system with parallel servers and then to develop an approach to optimal load distribution across a network of interconnected stations. Finally, the approach is used for the first time to analyse a class of intensity control problems.
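For a two-class single-server queue, work conservation confines the achievable mean-waiting-time pairs to a line segment whose endpoints correspond to the two strict-priority policies, so a linear objective is optimized at one of those vertices. A sketch with invented traffic parameters (the polytopes treated in the article are far richer than this segment):

```python
from scipy.optimize import linprog

# Invented two-class M/M/1 data: arrival rates, service rates, and
# per-job-per-unit-time holding costs.
lam, mu, c = [0.3, 0.4], [1.0, 2.0], [3.0, 1.0]
rho = [l / m for l, m in zip(lam, mu)]
w0 = sum(l / m**2 for l, m in zip(lam, mu))  # mean residual work

def priority_waits(hi):
    """Non-preemptive M/M/1 priority waiting times with class `hi` first."""
    lo = 1 - hi
    w = [0.0, 0.0]
    w[hi] = w0 / (1 - rho[hi])
    w[lo] = w0 / ((1 - rho[hi]) * (1 - rho[hi] - rho[lo]))
    return w

wa, wb = priority_waits(0), priority_waits(1)  # the two vertices

# Achievable region: the segment rho.W = const between the two vertices.
res = linprog(
    c=[c[i] * lam[i] for i in range(2)],  # holding cost via Little's law
    A_eq=[rho], b_eq=[rho[0] * wa[0] + rho[1] * wa[1]],
    bounds=[sorted((wa[i], wb[i])) for i in range(2)],
)
print(res.x)  # the optimum sits at a vertex, i.e., a strict-priority policy
```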

Relevance: 30.00%

Publisher:

Abstract:

Background and Aims: Normal weight obesity (NWO) has been defined as excessive body fat (BF) associated with a normal body mass index (BMI). Little is known regarding its prevalence in the general population or which cut-offs for BF should be used. Methods: Convenience sample of 1,523 Portuguese adults. BF was measured by validated hand-held bioimpedance. NWO was defined as a BMI <25 kg/m2 and a %BF >30%, along with other published criteria. Results: The prevalence of NWO was 10.1% in women and 3.2% in men. In women, the prevalence of NWO increased considerably with age, and virtually all women aged over 55 with a BMI <25 kg/m2 were classified as NWO. Using gender-specific cut-offs for %BF (29.1% in men and 37.2% in women) led to a moderately lower prevalence of NWO in women. Using gender- and age-specific cut-points for %BF considerably decreased the prevalence of NWO in women (0.5 to 2.5%, depending on the criterion) but not in men (1.9 to 3.4%). Conclusions: Gender- and age-specific, or at least gender-specific, cut-offs for %BF, rather than a single cut-off, should be used to characterize and study NWO.
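The classification itself reduces to a simple rule on BMI and %BF. A sketch using the gender-specific cut-offs quoted above (29.1% for men, 37.2% for women), with the single 30% cut-off as a fallback:

```python
def is_nwo(bmi, pct_bf, sex=None):
    """Normal weight obesity: normal BMI (<25 kg/m2) with excess body fat.
    Uses the gender-specific %BF cut-offs quoted in the abstract when sex
    is given (29.1% men, 37.2% women), else the single 30% cut-off."""
    if bmi >= 25:
        return False
    cutoff = {"m": 29.1, "f": 37.2}.get(sex, 30.0)
    return pct_bf > cutoff

print(is_nwo(23.4, 31.0))       # True with the single 30% cut-off
print(is_nwo(23.4, 31.0, "f"))  # False with the female-specific cut-off
```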

Relevance: 30.00%

Publisher:

Abstract:

Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism generating the bit sequence. The goal of the predictor is to minimize its relative loss, i.e., to make (almost) as few mistakes as the best "expert" in a fixed, possibly infinite, set of experts. We point out a surprising connection between this prediction problem and empirical process theory. First, in the special case of static (memoryless) experts, we completely characterize the minimax relative loss in terms of the maximum of an associated Rademacher process. Then we show general upper and lower bounds on the minimax relative loss in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax relative loss for the class of autoregressive linear predictors and for the class of Markov experts.
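The standard constructive counterpart to such minimax bounds is the exponentially weighted average (randomized weighted majority) forecaster, whose expected relative loss against N experts after n rounds is O(√(n log N)). A minimal sketch of that classical algorithm (not the minimax-optimal predictors characterized in the paper):

```python
import math
import random

def predict_with_experts(bits, experts, eta=None):
    """Randomized exponentially weighted forecaster over a finite set of
    experts. `experts` maps the round index t to a list of N predictions
    in {0, 1}; expected regret is about sqrt(n/2 * ln N)."""
    n, N = len(bits), len(experts(0))
    eta = eta or math.sqrt(8 * math.log(N) / n)
    log_w = [0.0] * N
    mistakes = 0
    for t, bit in enumerate(bits):
        preds = experts(t)
        # Sample an expert proportionally to its weight and follow it.
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        chosen = random.choices(range(N), weights=w)[0]
        mistakes += preds[chosen] != bit
        # Exponential update: penalize experts that erred this round.
        log_w = [lw - eta * (p != bit) for lw, p in zip(log_w, preds)]
    return mistakes

# Two static experts (always-0 and always-1) on a biased random sequence.
bits = [1 if random.random() < 0.7 else 0 for _ in range(1000)]
print(predict_with_experts(bits, lambda t: [0, 1]))
```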

Relevance: 30.00%

Publisher:

Abstract:

This work presents an application of multilevel analysis techniques to the study of abstention in the 2000 Spanish general election. The interest of the study is both substantive and methodological. From the substantive point of view, the article seeks to explain the causes of abstention and to analyse the impact of associationism on it. From the methodological point of view, it aims to analyse the interaction between individual and context with a model that takes into account the hierarchical structure of the data. The multilevel study in this paper validates the single-level results obtained in previous analyses of abstention and shows that only a fraction of the differences in abstention is explained by the individual characteristics of the electors. Another important fraction of these differences is due to the political and social characteristics of the context. Regarding associationism, the data suggest that individual participation in associations decreases the probability of abstention. However, better indicators are needed to capture more properly the effect of associationism on electoral behaviour.
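A hierarchical model of this kind can be sketched as a two-level random-intercept regression of abstention on individual covariates with a district-level intercept. The sketch below uses a linear-probability MixedLM on synthetic data for simplicity; a multilevel logit would be the closer analogue, and all variable names and effects are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic individual-level data nested in electoral districts.
rng = np.random.default_rng(3)
n, n_districts = 2000, 40
district = rng.integers(0, n_districts, n)
district_effect = rng.normal(0, 0.08, n_districts)  # context-level variance
member = rng.integers(0, 2, n)                      # belongs to associations
age = rng.uniform(18, 90, n)

p = 0.30 - 0.10 * member - 0.001 * (age - 50) + district_effect[district]
abstain = (rng.random(n) < p).astype(int)
df = pd.DataFrame({"abstain": abstain, "member": member,
                   "age": age, "district": district})

# Two-level random-intercept model: individual predictors plus a random
# intercept per district capturing contextual differences.
model = smf.mixedlm("abstain ~ member + age", df, groups=df["district"]).fit()
print(model.summary())
```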