943 results for Approximate Bayesian computation, Posterior distribution, Quantile distribution, Response time data


Relevance: 60.00%

Abstract:

Low cardiac output impairs the hepatic arterial buffer response (HABR). Whether this is due to low abdominal blood flow per se is not known. Dobutamine is commonly used to increase cardiac output, and it may further modify hepatosplanchnic and renal vasoregulation. We assessed the effects of isolated abdominal aortic blood flow changes and dobutamine on hepatosplanchnic and renal blood flow. Twenty-five anesthetized pigs with an abdominal aorto-aortic shunt were randomized to 2 control groups [zero (n = 6) and minimal (n = 6) shunt flow] and 2 groups with 50% reduction of abdominal blood flow followed by either increased abdominal blood flow through shunt reduction (n = 6) or dobutamine infusion at 5 and 10 microg kg(-1) min(-1) with constant shunt flow (n = 7). Regional (ultrasound) and local (laser Doppler) intra-abdominal blood flows were measured. The HABR was assessed during acute portal vein occlusion. Sustained low abdominal blood flow, induced by shunt activation, decreased liver, gut, and kidney blood flow similarly and reduced local microcirculatory blood flow in the jejunum. Shunt flow reduction partially restored regional blood flows but not jejunal microcirculatory blood flow. Low- but not high-dose dobutamine increased gut and celiac trunk flow, whereas hepatic artery and renal blood flows remained unchanged. Neither intervention altered local blood flows. The HABR was not abolished during sustained low abdominal blood flow despite substantially reduced hepatic arterial blood flow, and it was not modified by dobutamine. Low- but not high-dose dobutamine redistributes blood flow toward the gut and celiac trunk. Jejunal microcirculatory flow, once impaired, is difficult to restore.

Relevance: 60.00%

Abstract:

The construction of a reliable, practically useful prediction rule for a future response depends heavily on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected value of the absolute difference between the future and predicted responses, as the model evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the misclassification error for binary outcomes. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth", the variance of the above normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference of the estimated prediction errors from two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide much more information about model adequacy than the point estimates alone.
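As a rough illustration of the criterion, the sketch below (Python, synthetic data) computes the apparent absolute prediction error of an ordinary least squares fit and its K-fold cross-validation counterpart. The data, the linear model and the fold count are assumptions made for illustration, not the examples analyzed in the article.

```python
# Minimal sketch: absolute prediction error of a linear model, both the
# apparent error and a K-fold cross-validated estimate. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -0.5, 2.0]) + rng.normal(scale=1.0, size=n)

def fit_ols(X, y):
    """Ordinary least squares with an intercept column."""
    Xd = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Apparent error: mean absolute difference on the training data itself.
beta = fit_ols(X, y)
apparent_error = np.mean(np.abs(y - predict(beta, X)))

# Cross-validated counterpart: each fold is predicted by a model
# fitted on the remaining folds.
K = 10
folds = np.array_split(rng.permutation(n), K)
cv_abs_errors = []
for test_idx in folds:
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    b = fit_ols(X[train_idx], y[train_idx])
    cv_abs_errors.append(np.abs(y[test_idx] - predict(b, X[test_idx])))
cv_error = np.mean(np.concatenate(cv_abs_errors))

print(f"apparent APE: {apparent_error:.3f}, cross-validated APE: {cv_error:.3f}")
```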

Relevance: 60.00%

Abstract:

Many phase II clinical studies in oncology use a two-stage frequentist design such as Simon's optimal design. However, these designs share a logistical problem regarding patient accrual at the interim analysis. Strictly speaking, patient accrual may have to be suspended at the end of the first stage until all enrolled patients have had their outcomes, success or failure, observed. For example, when the study endpoint is six-month progression-free survival, accrual has to be stopped until all stage I outcomes are observed. Study investigators, however, may be concerned about suspending accrual after the first stage because of the loss of accrual momentum during this hiatus. We propose a two-stage phase II design that resolves the patient accrual problem caused by the interim analysis and can serve as an alternative to frequentist two-stage phase II designs in oncology.
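For context, the sketch below computes the operating characteristics of the classical frequentist benchmark the abstract refers to, Simon's two-stage design: the probability of early termination (PET) and the overall probability of declaring the drug inactive. The design parameters and response rates are illustrative assumptions, not values from the proposed design.

```python
# Sketch of the operating characteristics of a classical Simon two-stage
# design. The design parameters (r1, n1, r, n) and response rates below
# are illustrative, not from the study described above.
from scipy.stats import binom

def simon_two_stage(p, r1, n1, r, n):
    """For true response probability p: stop after stage 1 if <= r1 responses
    in n1 patients; otherwise declare the drug inactive if <= r responses
    are observed in n patients overall. Returns (PET, P(declare inactive))."""
    pet = binom.cdf(r1, n1, p)
    reject = pet + sum(
        binom.pmf(x1, n1, p) * binom.cdf(r - x1, n - n1, p)
        for x1 in range(r1 + 1, min(n1, r) + 1)
    )
    return pet, reject

# Example with illustrative values r1=1, n1=12, r=5, n=35,
# evaluated at a null (p=0.10) and an alternative (p=0.30) response rate.
for p in (0.10, 0.30):
    pet, reject = simon_two_stage(p, r1=1, n1=12, r=5, n=35)
    print(f"p={p:.2f}: PET={pet:.3f}, P(declare inactive)={reject:.3f}")
```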

Relevance: 60.00%

Abstract:

Nowadays, computing platforms consist of a very large number of components that must be supplied with different voltage levels and have different power requirements. Even a very small platform, such as a handheld computer, may contain more than twenty different loads and voltage regulators. The power delivery designers of these systems must provide, in a very short time, the right power architecture that optimizes performance and meets electrical specifications as well as cost and size targets. The appropriate selection of the architecture and converters directly defines the performance of a given solution. Therefore, the designer needs to be able to evaluate a significant number of options in order to know with good certainty whether the selected solutions meet the size, energy efficiency and cost targets. The difficulty of selecting the right solution arises from the wide range of power conversion products provided by different manufacturers, ranging from discrete components (to build converters) to complete power conversion modules that employ different manufacturing technologies. Consequently, in most cases it is not possible to analyze all the alternatives (combinations of power architectures and converters) that can be built, and the designer has to select a limited number of converters in order to simplify the analysis. To overcome these difficulties, this thesis proposes a new design methodology for power supply systems that integrates evolutionary computation techniques to make it possible to analyze a large number of possibilities. This exhaustive analysis helps the designer to quickly define a set of feasible solutions and select the best performance trade-off for each application. The proposed approach consists of two key steps, one for the automatic generation of architectures and the other for the optimized selection of components; the thesis details the implementation of both steps. The usefulness of the methodology is corroborated with real problems and with experiments designed to test the limits of the algorithms.
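As a loose illustration of the optimized component-selection step, the sketch below runs a simple genetic algorithm that assigns one converter from a small catalogue to each load, trading off cost against efficiency. The catalogue, the fitness weighting and the GA settings are invented for illustration; the methodology in the thesis is considerably richer.

```python
# Toy genetic algorithm for converter selection: each individual assigns
# one converter from a catalogue to each load. All numbers are invented.
import random

# Hypothetical catalogue: (name, cost in USD, efficiency)
CATALOGUE = [("buck_A", 1.2, 0.90), ("buck_B", 2.0, 0.94), ("module_C", 3.5, 0.96)]
N_LOADS = 6          # number of loads to supply
POP, GENS = 40, 100  # population size and generations

def fitness(ind):
    # Minimize a weighted sum of total cost and total conversion loss.
    cost = sum(CATALOGUE[g][1] for g in ind)
    loss = sum(1.0 - CATALOGUE[g][2] for g in ind)
    return cost + 10.0 * loss

def mutate(ind, rate=0.1):
    return [random.randrange(len(CATALOGUE)) if random.random() < rate else g
            for g in ind]

def crossover(a, b):
    cut = random.randrange(1, N_LOADS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(len(CATALOGUE)) for _ in range(N_LOADS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness)
    elite = pop[: POP // 2]                      # keep the best half
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]

best = min(pop, key=fitness)
print("best assignment:", [CATALOGUE[g][0] for g in best],
      "fitness:", round(fitness(best), 3))
```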

Relevance: 60.00%

Abstract:

In recent decades, there has been increasing interest in systems comprised of several autonomous mobile robots, and as a result there has been substantial development in the field of Artificial Intelligence, especially in Robotics. Several studies in the literature focus on the creation of intelligent machines and devices capable of imitating the functions and movements of living beings. Multi-Robot Systems (MRS) can often deal with tasks that are difficult, if not impossible, to accomplish with a single robot. In the context of MRS, one of the main challenges is the need to control, coordinate and synchronize the operation of multiple robots to perform a specific task. This requires the development of new strategies and methods which allow us to obtain the desired system behavior in a formal and concise way. This PhD thesis studies the coordination of multi-robot systems and, in particular, addresses the problem of the distribution of heterogeneous multi-tasks. The main interest is to understand how, from simple rules inspired by the division of labor in social insects, a group of robots can perform tasks in an organized and coordinated way. We are mainly interested in truly distributed or decentralized solutions in which the robots themselves, autonomously and individually, select a particular task so that all tasks are optimally distributed. In general, to distribute the multi-tasks among a team of robots, the robots have to synchronize their actions and exchange information. Under this approach we can speak of multi-task selection instead of multi-task assignment, meaning that the agents or robots select the tasks rather than being assigned a task by a central controller. The key element in these algorithms is the estimation of the stimuli and the adaptive update of the thresholds: each robot performs this estimate locally, depending on the load, that is, the number of pending tasks to be performed. In addition, the results of each approach are evaluated by introducing noise into the number of pending loads, in order to simulate the robot's error in estimating the real number of pending tasks. The main contribution of this thesis is the approach based on self-organization and division of labor in social insects. An experimental scenario for the coordination problem among multiple robots, the robustness of the approaches and the generation of dynamic tasks are presented and discussed. The particular issues studied are:
Threshold models: experiments conducted to test the response threshold model, analyzing the system performance index for the problem of the distribution of heterogeneous multi-tasks in multi-robot systems; additive noise was introduced into the number of pending loads and dynamic tasks were generated over time (a minimal sketch of the threshold rule is given after this list).
Learning automata methods: experiments to test the learning automata-based probabilistic algorithms. The approach was evaluated using the system performance index, with additive noise and dynamic task generation, for the same problem of the distribution of heterogeneous multi-tasks in multi-robot systems.
Ant colony optimization: experiments to test the ant colony optimization-based deterministic algorithms for the distribution of heterogeneous multi-tasks in multi-robot systems. The system performance index is again evaluated under additive noise and dynamic task generation over time.
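A minimal sketch of the response-threshold selection rule referenced above follows, using the standard threshold function s^n / (s^n + theta^n) from the division-of-labour literature; the task types, thresholds, stimuli and noise level are illustrative assumptions.

```python
# Sketch of a response-threshold task selection step, in the spirit of the
# division-of-labour models used in the thesis. Numbers are illustrative.
import random

def engage_probability(stimulus, threshold, n=2):
    """Probability that a robot engages a task: s^n / (s^n + theta^n)."""
    return stimulus**n / (stimulus**n + threshold**n)

def select_task(stimuli, thresholds, noise=0.0):
    """Each stimulus is the (possibly noisy) number of pending loads of a
    task type; the robot picks one of the engaged task types at random."""
    choices = []
    for task, s in stimuli.items():
        s_noisy = max(0.0, s + random.gauss(0.0, noise))  # estimation error
        if random.random() < engage_probability(s_noisy, thresholds[task]):
            choices.append(task)
    return random.choice(choices) if choices else None

stimuli = {"transport": 8.0, "cleaning": 2.0}    # pending loads per task type
thresholds = {"transport": 4.0, "cleaning": 4.0}
print(select_task(stimuli, thresholds, noise=1.0))
```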

Relevance: 60.00%

Abstract:

This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-selection of heterogeneous specialized tasks by autonomous robots. We focus on a fully distributed or decentralized approach, as we are particularly interested in solutions where the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all the existing tasks are optimally distributed and executed. In this regard, we have established an experimental scenario to solve the corresponding multi-task distribution problem and we propose a solution using two different approaches, applying Response Threshold Models as well as Learning Automata-based probabilistic algorithms. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, to simulate the robot's error in estimating the real number of pending tasks, and by generating loads dynamically through time. The paper ends with a critical discussion of the experimental results.
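To make the second approach concrete, the sketch below implements a basic linear reward-inaction learning automaton for task selection; the learning rate, the task set and the way reward is granted are illustrative assumptions rather than the exact scheme evaluated in the paper.

```python
# Sketch of a linear reward-inaction learning automaton for task selection.
# Learning rate, tasks and reward signal are illustrative assumptions.
import random

class TaskAutomaton:
    def __init__(self, tasks, learning_rate=0.1):
        self.tasks = list(tasks)
        self.lam = learning_rate
        self.p = {t: 1.0 / len(tasks) for t in tasks}  # action probabilities

    def choose(self):
        r, acc = random.random(), 0.0
        for t in self.tasks:
            acc += self.p[t]
            if r <= acc:
                return t
        return self.tasks[-1]

    def reward(self, chosen):
        """Linear reward-inaction update after a favourable response;
        probabilities still sum to one afterwards."""
        for t in self.tasks:
            if t == chosen:
                self.p[t] += self.lam * (1.0 - self.p[t])
            else:
                self.p[t] *= (1.0 - self.lam)

auto = TaskAutomaton(["transport", "cleaning"])
for _ in range(50):
    t = auto.choose()
    if t == "transport":      # pretend "transport" has more pending loads
        auto.reward(t)
print(auto.p)
```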

Relevance: 60.00%

Abstract:

Prediction at ungauged sites is essential for water resources planning and management. Ungauged sites have no observations of flood magnitudes, but some site and basin characteristics are known. Regression models relate physiographic and climatic basin characteristics to flood quantiles, which can be estimated from observed data at gauged sites. However, these models assume linear relationships between variables, and prediction intervals are estimated from the variance of the residuals of the fitted model. Furthermore, the effect of uncertainties in the explanatory variables on the dependent variable cannot be assessed. This paper presents a methodology to propagate the uncertainties that arise in the process of predicting flood quantiles at ungauged basins with a regression model. In addition, Bayesian networks were explored as a feasible tool for predicting flood quantiles at ungauged sites. Bayesian networks account for uncertainties naturally thanks to their probabilistic nature; they are able to capture non-linear relationships between variables and they return a probability distribution of discharges as a result. The methodology was applied to a case study in the Tagus basin in Spain.
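The regression step described above can be sketched as a log-linear model relating a flood quantile to basin characteristics, with a prediction interval derived from the residual variance only. The synthetic data, the chosen predictors (area, mean annual precipitation) and the quantile are illustrative assumptions, not the Tagus case study.

```python
# Minimal sketch: log-linear regression of a flood quantile on basin
# characteristics, with an approximate prediction interval from the
# residual variance. Data and predictors are synthetic/illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 40
area = rng.uniform(50, 5000, n)        # basin area, km^2
precip = rng.uniform(400, 1500, n)     # mean annual precipitation, mm
logQ100 = 0.8 * np.log(area) + 0.002 * precip + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), np.log(area), precip])
beta, *_ = np.linalg.lstsq(X, logQ100, rcond=None)
resid = logQ100 - X @ beta
sigma = np.sqrt(resid @ resid / (n - X.shape[1]))   # residual std. deviation

# Prediction at an "ungauged" basin, with an approximate 95% interval
# based only on residual variance (parameter uncertainty ignored).
x_new = np.array([1.0, np.log(800.0), 900.0])
pred = x_new @ beta
low, high = np.exp(pred - 1.96 * sigma), np.exp(pred + 1.96 * sigma)
print(f"Q100 ~ {np.exp(pred):.0f} m^3/s, 95% interval [{low:.0f}, {high:.0f}]")
```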

Relevance: 60.00%

Abstract:

The distribution of S to sulfate, glucosinolates, glutathione, and the insoluble fraction within oilseed rape (Brassica napus L.) leaves of different ages was investigated during vegetative growth. The concentrations of glutathione and glucosinolates increased from the oldest to the youngest leaves, whereas the opposite was observed for SO42−. The concentration of insoluble S was similar among all of the leaves. At sufficient S supply and in the youngest leaves, 2% of total S was allocated to glutathione, 6% to glucosinolates, 50% to the insoluble fraction, and the remainder accumulated as SO42−. In the middle and oldest leaves, 70% to 90% of total S accumulated as SO42−, whereas glutathione and glucosinolates together accounted for less than 1% of S. When the S supply was withdrawn (minus S), the concentrations of all S-containing compounds, particularly SO42−, decreased in the youngest and middle leaves. Neither glucosinolates nor glutathione were major sources of S during S deficiency. Plants grown on nutrient solution containing minus S and low N were less deficient than plants grown on solution containing minus S and high N. The effect of N was explained by differences in growth rate. The different responses of leaves of different ages to S deficiency have to be taken into account for the development of field diagnostic tests to determine whether plants are S deficient.

Relevance: 60.00%

Abstract:

Jasmonic acid (JA) is a naturally occurring growth regulator found in higher plants. Several physiological roles have been described for this compound (or a related compound, methyl jasmonate) during plant development and in response to biotic and abiotic stress. To accurately determine JA levels in plant tissue, we have synthesized JA containing 13C for use as an internal standard, with an isotopic composition of [225]:[224] 0.98:0.02 compared with [225]:[224] 0.15:0.85 for natural material. GC analysis (flame ionization detection and MS) indicates that the internal standard is composed of 92% 2-(+/-)-[13C]JA and 8% 2-(+/-)-7-iso-[13C]JA. In soybean plants, JA levels were highest in young leaves, flowers, and fruit (highest in the pericarp). In soybean seeds and seedlings, JA levels were highest in the youngest organs, including the hypocotyl hook, plumule, and 12-h axis. In soybean leaves that had been dehydrated to cause a 15% decrease in fresh weight, JA levels increased approximately 5-fold within 2 h and declined to approximately control levels by 4 h. In contrast, a lag time of 1-2 h occurred before abscisic acid accumulation reached a maximum. These results are discussed in the context of multiple pathways for JA biosynthesis and the role of JA in plant development and responses to environmental signals.

Relevance: 60.00%

Abstract:

Aims: Previous data suggest heterogeneity in the laminar distribution of the pathology in the molecular disorder frontotemporal lobar degeneration (FTLD) with transactive response (TAR) DNA-binding protein of 43 kDa (TDP-43) proteinopathy (FTLD-TDP). To study this heterogeneity, we quantified the changes in density across the cortical laminae of neuronal cytoplasmic inclusions, glial inclusions, neuronal intranuclear inclusions, dystrophic neurites, surviving neurones, abnormally enlarged neurones, and vacuoles in regions of the frontal and temporal lobe. Methods: Changes in density of histological features across cortical gyri were studied in 10 sporadic cases of FTLD-TDP using quantitative methods and polynomial curve fitting. Results: Our data suggest that laminar neuropathology in sporadic FTLD-TDP is highly variable. Most commonly, neuronal cytoplasmic inclusions, dystrophic neurites and vacuolation were abundant in the upper laminae, and glial inclusions, neuronal intranuclear inclusions, abnormally enlarged neurones, and glial cell nuclei in the lower laminae. TDP-43-immunoreactive inclusions affected more of the cortical profile in longer-duration cases; their distribution varied with disease subtype, but was unrelated to Braak tangle score. Different TDP-43-immunoreactive inclusions were not spatially correlated. Conclusions: The laminar distribution of pathological features in 10 sporadic cases of FTLD-TDP is heterogeneous and may be accounted for, in part, by disease subtype and disease duration. In addition, the feedforward and feedback cortico-cortical connections may be compromised in FTLD-TDP. © 2012 The Authors. Neuropathology and Applied Neurobiology © 2012 British Neuropathological Society.

Relevance: 60.00%

Abstract:

The inverse controller is traditionally assumed to be a deterministic function. This paper presents a pedagogical methodology for estimating the stochastic model of the inverse controller. The proposed method is based on Bayes' theorem. Using Bayes' rule to obtain the stochastic model of the inverse controller allows the use of knowledge of uncertainty from both the inverse and the forward model in estimating the optimal control signal. The paper presents the methodology for general nonlinear systems and is demonstrated on nonlinear single-input-single-output (SISO) and multiple-input-multiple-output (MIMO) examples. © 2006 IEEE.
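As a toy illustration of the core idea, the sketch below combines an assumed stochastic forward model p(y | u) with a prior p(u) on a grid of control values and applies Bayes' rule to obtain a posterior over the control signal for a desired output. The plant model, noise level and prior are assumptions for illustration, not the systems used in the paper.

```python
# Grid-based Bayes' rule for a stochastic inverse controller (SISO toy case).
# Forward model, noise level and prior are illustrative assumptions.
import numpy as np

u_grid = np.linspace(-2.0, 2.0, 401)          # candidate control signals
y_desired = 1.5                                # desired plant output

def forward_mean(u):
    """Assumed nonlinear forward (plant) model: mean output for control u."""
    return u + 0.3 * u**3

forward_std = 0.2                              # assumed forward-model uncertainty
prior = np.exp(-0.5 * (u_grid / 1.0) ** 2)     # Gaussian prior over u (unnormalized)

# Likelihood of producing the desired output for each candidate control.
likelihood = np.exp(-0.5 * ((y_desired - forward_mean(u_grid)) / forward_std) ** 2)

# Bayes' rule on the grid: posterior mass proportional to likelihood * prior.
posterior = likelihood * prior
posterior /= posterior.sum()

u_map = u_grid[np.argmax(posterior)]           # maximum a posteriori control
u_mean = float(np.sum(u_grid * posterior))     # posterior-mean control
print(f"MAP control: {u_map:.3f}, posterior mean control: {u_mean:.3f}")
```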

Relevance: 60.00%

Abstract:

2000 Mathematics Subject Classification: 62E16, 65C05, 65C20.

Relevance: 60.00%

Abstract:

From 1992 to 2012, 4.4 billion people were affected by disasters worldwide, with almost 2 trillion USD in damages and 1.3 million people killed. The increasing threat of disasters stresses the need to provide solutions for the challenges faced by disaster managers, such as the logistical deployment of the resources required to provide relief to victims. The location of emergency facilities, stock prepositioning, evacuation, inventory management, resource allocation, and relief distribution have been identified as directly affecting the relief provided to victims during a disaster. Managing these factors appropriately is critical to reduce suffering. Disaster management commonly attracts several organisations working alongside each other and sharing resources to cope with the emergency. Coordinating these agencies is a complex task, but there is little research considering multiple organisations, and none actually optimising the number of actors required to avoid shortages and convergence. The aim of this research is to develop a system for disaster management based on a combination of optimisation techniques and geographical information systems (GIS) to aid multi-organisational decision-making. An integrated decision system was created, comprising a cartographic model implemented in GIS to discard floodable facilities, combined with two models focused on optimising the decisions regarding the location of emergency facilities, stock prepositioning, the allocation of resources and relief distribution, along with the number of actors required to perform these activities. Three in-depth case studies in Mexico were conducted, gathering information from different organisations. The cartographic model proved to reduce the risk of selecting unsuitable facilities. The preparedness and response models showed the capacity to optimise the decisions and the number of organisations required for logistical activities, pointing towards an excess of actors involved in all cases. The system as a whole demonstrated its capacity to provide integrated support for disaster preparedness and response, and revealed room for improvement for Mexican organisations in flood management.
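One of the preparedness decisions mentioned above, the location of emergency facilities, can be sketched as a small p-median style problem: choose p facilities from a candidate set so that the total demand-weighted distance to communities is minimized. The candidates, demands and distances below are invented for illustration; the models in the thesis additionally cover stock prepositioning, relief distribution and the number of actors.

```python
# Tiny p-median style facility-location sketch by exhaustive search.
# Candidates, demands and distances are invented for illustration.
from itertools import combinations

candidates = ["A", "B", "C", "D"]
communities = {"c1": 120, "c2": 80, "c3": 200}          # demand (people)
dist = {                                                # km from candidate to community
    "A": {"c1": 5, "c2": 12, "c3": 20},
    "B": {"c1": 9, "c2": 4, "c3": 15},
    "C": {"c1": 18, "c2": 10, "c3": 3},
    "D": {"c1": 25, "c2": 22, "c3": 8},
}

def total_cost(chosen):
    """Each community is served by its nearest chosen facility."""
    return sum(d * min(dist[f][c] for f in chosen) for c, d in communities.items())

p = 2
best = min(combinations(candidates, p), key=total_cost)
print("open facilities:", best, "demand-weighted distance:", total_cost(best))
```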