2 results for alternative p-values

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance:

80.00%

Abstract:

Introduction: Seizures are harmful to the neonatal brain, which compels many clinicians and researchers to keep optimizing every aspect of the management of neonatal seizures.

Aims: To compare the seizure profile of non-cooled versus cooled neonates with hypoxic-ischaemic encephalopathy (HIE), to characterize seizures in neonates with stroke, to assess the response of seizure burden to phenobarbitone, and to quantify the degree of electroclinical dissociation (ECD) of seizures.

Methods: Multichannel video-EEG was used in this study as the gold standard for seizure detection, allowing accurate quantification of seizure burden in term neonates. The entire EEG recording for each neonate was independently reviewed by at least one experienced neurophysiologist. Data are expressed as medians and interquartile ranges. Linear mixed-model results are presented as mean (95% confidence interval); p values <0.05 were deemed significant.

Results: Seizure burden in cooled neonates was lower than in non-cooled neonates [60 (39-224) vs 203 (141-406) minutes; p=0.027]. Among cooled neonates, seizure burden was lower in moderate HIE than in severe HIE [49 (26-89) vs 162 (97-262) minutes; p=0.020]. In neonates with stroke, the background pattern showed suppression over the infarcted side and seizures demonstrated a characteristic pattern. Compared with 10 mg/kg, phenobarbitone doses of 20 mg/kg reduced seizure burden (p=0.004). Seizure burden was reduced within 1 hour of phenobarbitone administration [mean (95% confidence interval): -14 (-20 to -8) minutes/hour; p<0.001], but seizures returned to pre-treatment levels within 4 hours (p=0.064). The ECD index was 88% in cooled neonates with HIE, 94% in non-cooled neonates with HIE, 64% in neonates with stroke, and 75% in neonates with other diagnoses.

Conclusions: Further research exploring treatment effects on seizure burden in the neonatal brain is required. A change to our current treatment strategy is warranted as we continue to strive for more effective seizure control, anchored in the use of multichannel EEG as the surveillance tool.
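
The linear mixed-model analysis described in the Methods can be sketched in a few lines of Python. The example below is a minimal, hypothetical illustration, not the study's actual analysis: the column names (infant_id, hour, post_dose_1h, seizure_minutes) and the simulated data are invented for demonstration, and the statsmodels package is assumed.

```python
# Minimal sketch: a random-intercept linear mixed model for per-infant hourly
# seizure burden around a treatment dose. Data and column names are simulated
# and illustrative only; they are not from the study described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_infants, n_hours = 30, 6
df = pd.DataFrame({
    "infant_id": np.repeat(np.arange(n_infants), n_hours),
    "hour": np.tile(np.arange(n_hours), n_infants),
})
# Simulated minutes of seizure per hour: a per-infant baseline plus a
# transient reduction during the first hour after dosing.
baseline = rng.normal(20, 5, n_infants)[df["infant_id"]]
df["post_dose_1h"] = (df["hour"] == 1).astype(int)
df["seizure_minutes"] = np.clip(
    baseline - 14 * df["post_dose_1h"] + rng.normal(0, 4, len(df)), 0, None
)

# Random intercept per infant; fixed effect for the first post-dose hour.
model = smf.mixedlm("seizure_minutes ~ post_dose_1h", df,
                    groups=df["infant_id"])
result = model.fit()
print(result.summary())    # fixed-effect estimate: change in minutes/hour
print(result.conf_int())   # 95% confidence intervals
```

The fixed-effect coefficient on the post-dose indicator plays the role of the "change in seizure minutes per hour" quantity reported in the abstract, with the infant-level random intercept accounting for repeated measurements from the same neonate.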

Relevance:

30.00%

Abstract:

In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first, the current state of the world, under which the decisions are to be made, is known in advance. In the second, the current state of the world is unknown at the time of making decisions.

For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm to generate the upper bound at each node of the search tree (in the context of maximizing the values of objectives). Since the size of the guiding upper bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches: the first is based on linear programming; the second uses a distance-based algorithm (which uses a measure of the distance between a point and a convex cone); and the third makes use of matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. A Pareto-dominance filtering sketch follows below.

For decision-making problems under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
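
The core operation of keeping only the maximal (non-dominated) utility vectors under the Pareto ordering can be illustrated with a short sketch. The Python code below is a hypothetical example of plain Pareto-dominance filtering under maximization, not code from the thesis; in the thesis, the dominance test would instead use the preference relation induced by the imprecise trade-offs, checked via linear programming, a distance-to-cone computation, or matrix multiplication.

```python
# Minimal sketch of Pareto-dominance filtering for multi-objective utility
# vectors under maximization. Function names are illustrative; the thesis's
# trade-off-induced dominance check would replace `dominates` below.
from typing import List, Sequence


def dominates(u: Sequence[float], v: Sequence[float]) -> bool:
    """True if u is at least as good as v on every objective and strictly
    better on at least one (Pareto dominance, maximizing)."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))


def pareto_maximal(vectors: List[Sequence[float]]) -> List[Sequence[float]]:
    """Keep only the vectors not dominated by any other vector in the set."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]


if __name__ == "__main__":
    utilities = [(3, 5), (4, 4), (2, 6), (1, 1), (2, 2)]
    print(pareto_maximal(utilities))   # -> [(3, 5), (4, 4), (2, 6)]
```

Because the Pareto-maximal set can grow prohibitively large, the thesis's ϵ-covering and trade-off-based reductions keep only a representative subset of such vectors; the quadratic pairwise check shown here is merely the simplest baseline against which those refinements are motivated.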