Abstract:
Esophageal and gastroesophageal junction (GEJ) adenocarcinoma is a rapidly increasing disease with a pathophysiology connected to oxidative stress. Exact pre-treatment clinical staging is essential for optimal care of this lethal malignancy. The cost-effectiveness of treatment is increasingly important. We measured oxidative metabolism in the distal and proximal esophagus by myeloperoxidase activity (MPA), glutathione content (GSH), and superoxide dismutase (SOD) in 20 patients operated on with Nissen fundoplication and 9 controls during a 4-year follow-up. Further, we assessed oxidative damage to DNA by 8-hydroxydeoxyguanosine (8-OHdG) in esophageal samples of subjects (13 with Barrett's metaplasia, 6 with Barrett's esophagus with high-grade dysplasia, 18 with adenocarcinoma of the distal esophagus/GEJ, and 14 normal controls). We estimated the accuracy (42 patients) and preoperative prognostic value (55 patients) of PET compared with computed tomography (CT) and endoscopic ultrasound (EUS) in patients with adenocarcinoma of the esophagus/GEJ. Finally, we clarified the specialty-related costs and the utility of either radical (30 patients) or palliative (23 patients) treatment of esophageal/GEJ carcinoma by the 15D health-related quality-of-life (HRQoL) questionnaire and the survival rate. The cost-utility of radical treatment of esophageal/GEJ carcinoma was investigated using a decision tree analysis model comparing radical, palliative, and hypothetical new treatments. We found elevated oxidative stress (measured by MPA) and decreased antioxidant defense (measured by GSH) after antireflux surgery, indicating that antireflux surgery is not a perfect solution for oxidative stress of the esophageal mucosa. Elevated oxidative stress may in turn partly explain why adenocarcinoma of the distal esophagus is found even after successful fundoplication. In GERD patients, proximal esophageal mucosal antioxidative defense seems to be defective before and even years after successful antireflux surgery. In addition, antireflux surgery apparently does not change the level of oxidative stress in the proximal esophagus, suggesting that defective mucosal antioxidative capacity plays a role in the development of oxidative damage to the esophageal mucosa in GERD. Oxidative stress appears to be an important component in the malignant transformation of Barrett's esophagus. DNA damage may be mediated by 8-OHdG, which we found to be increased in Barrett's epithelium and in high-grade dysplasia, as well as in adenocarcinoma of the esophagus/GEJ, compared with controls. The entire esophagus of Barrett's patients suffers from increased oxidative stress (measured by 8-OHdG). PET is a useful tool in the staging and prognostication of adenocarcinoma of the esophagus/GEJ, detecting organ metastases better than CT, although its accuracy in staging paratumoral and distant lymph nodes is limited. Radical surgery for esophageal/GEJ carcinoma provides the greatest benefit in terms of survival, and its cost-utility appears to be the best of currently available treatments.
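As a loose illustration of the expected-utility arithmetic behind such a decision-tree comparison, the sketch below computes expected costs, expected quality-adjusted life-years, and an incremental cost-effectiveness ratio for two treatment arms. All probabilities, costs, and utility values are hypothetical placeholders, not figures from the study.

```python
# Minimal sketch of a decision-tree cost-utility comparison.
# All probabilities, costs, and quality-of-life values below are
# hypothetical placeholders, NOT figures from the study.

def expected_values(branches):
    """Each branch: (probability, cost, qaly). Returns expected cost and QALYs."""
    exp_cost = sum(p * c for p, c, q in branches)
    exp_qaly = sum(p * q for p, c, q in branches)
    return exp_cost, exp_qaly

# Hypothetical branches: (probability, cost in EUR, quality-adjusted life-years)
radical = [(0.4, 60_000, 3.0),   # long-term survival after radical surgery
           (0.6, 45_000, 0.8)]   # recurrence / perioperative course
palliative = [(1.0, 20_000, 0.5)]

cost_r, qaly_r = expected_values(radical)
cost_p, qaly_p = expected_values(palliative)

# Incremental cost-effectiveness ratio of radical vs. palliative treatment
icer = (cost_r - cost_p) / (qaly_r - qaly_p)
print(f"radical:    cost={cost_r:.0f}, QALY={qaly_r:.2f}")
print(f"palliative: cost={cost_p:.0f}, QALY={qaly_p:.2f}")
print(f"ICER = {icer:.0f} EUR per QALY gained")
```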
Abstract:
A detailed study is presented of the expected performance of the ATLAS detector. The reconstruction of tracks, leptons, photons, missing energy and jets is investigated, together with the performance of b-tagging and the trigger. The physics potential for a variety of interesting physics processes, within the Standard Model and beyond, is examined. The study comprises a series of notes based on simulations of the detector and physics processes, with particular emphasis given to the data expected from the first years of operation of the LHC at CERN.
Abstract:
The structure of the mitotic chromosomes of Allium cepa has been elucidated by controlling the temperature and time of exposure of fresh roots to stain fixatives. The details seen in material stained in N HCl-orcein for 8 min at 60°C and squashed after varying intervals of storage at room temperature were essentially similar to pictures obtained with 1% aceto-orcein and 1% aceto-orcein-N HCl (10:1) under identical conditions of handling. The chromosomes appear quadripartite at metaphase and bipartite at anaphase. A rare instance of the precocious assumption of a quadripartite condition by two anaphase chromosomes is illustrated. Caduceus coiling of chromonemata was also seen in chromosome bridges. Chromosomes contain material easily dissociable from the chromonemata, and its removal does not affect the structural integrity of the chromosome.
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors of all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors and failure events. This in turn influences an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
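The abstract does not detail the prediction model, but the following minimal sketch illustrates the general idea of maintaining per-resource health vectors, mapping accumulated error events to a failure probability, and triggering proactive action when recoverability is low. The event weights, logistic mapping, and threshold are all illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch of the proactive-recovery idea described above.
# The health-vector fields, weights, and threshold are hypothetical.
import math
from collections import defaultdict

ERROR_WEIGHTS = {"ecc_corrected": 0.1, "disk_retry": 0.3, "fan_warning": 0.2}
FAILURE_THRESHOLD = 0.5  # act proactively above this predicted probability

health = defaultdict(float)  # per-resource accumulated error score

def record_event(resource, event):
    """Update the resource's health score with an observed hardware event."""
    health[resource] += ERROR_WEIGHTS.get(event, 0.0)

def failure_probability(resource):
    """Map the accumulated score to (0, 1) with a logistic curve."""
    return 1.0 / (1.0 + math.exp(-(health[resource] - 1.0)))

def check_and_act(resource, recoverability):
    p = failure_probability(resource)
    if p > FAILURE_THRESHOLD and recoverability == "low":
        print(f"{resource}: p(fail)={p:.2f} -> migrate workload proactively")
    else:
        print(f"{resource}: p(fail)={p:.2f} -> keep monitoring")

for _ in range(4):
    record_event("node7/disk0", "disk_retry")
check_and_act("node7/disk0", recoverability="low")
```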
Abstract:
Background: Bacterial non-coding small RNAs (sRNAs) have attracted considerable attention due to their ubiquitous nature and contribution to numerous cellular processes including survival, adaptation, and pathogenesis. Existing computational approaches for identifying bacterial sRNAs demonstrate varying levels of success, and there remains considerable room for improvement. Methodology/Principal Findings: Here we propose a transcriptional signal-based computational method to identify intergenic sRNA transcriptional units (TUs) in completely sequenced bacterial genomes. Our sRNAscanner tool uses position weight matrices derived from experimentally defined E. coli K-12 MG1655 sRNA promoter and rho-independent terminator signals to identify intergenic sRNA TUs through sliding-window-based genome scans. Analysis of genomes representative of twelve species suggested that sRNAscanner demonstrates sensitivity equivalent to sRNAPredict2, the best-performing bioinformatics tool available presently. However, each algorithm yielded substantial numbers of known and uncharacterized hits that were unique to one or the other tool. sRNAscanner identified 118 novel putative intergenic sRNA genes in Salmonella enterica Typhimurium LT2, none of which were flagged by sRNAPredict2. Candidate sRNA locations were compared with available deep sequencing libraries derived from Hfq-co-immunoprecipitated RNA purified from a second Typhimurium strain (Sittka et al. (2008) PLoS Genetics 4: e1000163). Sixteen potential novel sRNAs computationally predicted and detected in deep sequencing libraries were selected for experimental validation by Northern analysis using total RNA isolated from bacteria grown under eleven different growth conditions. RNA bands of expected sizes were detected in Northern blots for six of the examined candidates. Furthermore, the 5'-ends of these six Northern-supported sRNA candidates were successfully mapped using 5'-RACE analysis. Conclusions/Significance: We have developed, computationally examined, and experimentally validated the sRNAscanner algorithm. Data derived from this study have successfully identified six novel S. Typhimurium sRNA genes. In addition, the computational specificity analysis we have undertaken suggests that approximately 40% of sRNAscanner hits with a high cumulative sum of scores represent genuine, undiscovered sRNA genes. Collectively, these data strongly support the utility of sRNAscanner and offer a glimpse of its potential to reveal large numbers of sRNA genes that have to date defied identification. sRNAscanner is available from: http://bicmku.in:8081/sRNAscanner or http://cluster.physics.iisc.ernet.in/sRNAscanner/.
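The core operation of such a scan, scoring a position weight matrix against every window of a genome sequence, can be sketched as follows. The toy log-odds matrix, threshold, and sequence are illustrative; they are not sRNAscanner's actual promoter or terminator models.

```python
# Minimal sketch of position-weight-matrix (PWM) scoring with a sliding
# window, the core operation behind promoter/terminator signal scans of
# the kind sRNAscanner performs. The PWM and threshold are toy values.

# Log-odds PWM for a 4-base motif (one dict of base scores per position).
PWM = [
    {"A": 1.2, "C": -0.8, "G": -0.8, "T": 0.4},
    {"A": -0.5, "C": -0.5, "G": -0.5, "T": 1.5},
    {"A": 1.0, "C": -1.0, "G": 0.2, "T": -0.3},
    {"A": 1.3, "C": -0.9, "G": -0.9, "T": 0.1},
]
THRESHOLD = 2.5

def pwm_score(window):
    """Sum the per-position log-odds scores for one window."""
    return sum(col[base] for col, base in zip(PWM, window))

def scan(genome):
    """Slide the PWM along the sequence and report high-scoring positions."""
    hits = []
    for i in range(len(genome) - len(PWM) + 1):
        s = pwm_score(genome[i:i + len(PWM)])
        if s >= THRESHOLD:
            hits.append((i, s))
    return hits

print(scan("GGATAACCTTAATA"))  # positions where the motif score clears the threshold
```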
Abstract:
The study presents a theory of utility models based on aspiration levels, as well as the application of this theory to the planning of timber flow economics. The first part of the study comprises a derivation of the utility-theoretic basis for the application of aspiration levels. Two basic models are dealt with: the additive and the multiplicative. Aspiration and reservation levels, applied here solely to partial utility functions, are interpreted as defining piecewise linear functions. The standpoint of the decision-maker's choices is emphasized by the use of indifference curves. The second part of the study introduces a model for the management of timber flows. The model is based on the assumption that the decision-maker is willing to specify a shape of income flow which differs from that of the capital-theoretic optimum. The utility model comprises four aspiration-based compound utility functions. The theory and the flow model are tested numerically by computations covering three forest holdings. The results show that the additive model is sensitive even to slight changes in relative importance weights and aspiration levels. This applies particularly to nearly linear production possibility boundaries of monetary variables. The multiplicative model, on the other hand, is stable because it generates strictly convex indifference curves. Due to a higher marginal rate of substitution, the multiplicative model implies a stronger dependence on forest management than the additive function. For income trajectory optimization, a method utilizing an income trajectory index is more efficient than one based on the use of aspiration levels per management period. Smooth trajectories can be attained by squaring the deviations of the feasible trajectories from the desired one.
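One plausible reading of the aspiration-based partial utilities, together with the additive and multiplicative compound forms, is sketched below. The piecewise-linear shape (0 at the reservation level, 1 at the aspiration level), the weights, and the criterion values are illustrative assumptions rather than the study's specification.

```python
# Sketch of aspiration-based partial utilities and the two compound forms
# discussed above. Shapes, weights, and levels are illustrative assumptions.

def partial_utility(x, reservation, aspiration):
    """Piecewise-linear partial utility: 0 below reservation, 1 above aspiration."""
    if x <= reservation:
        return 0.0
    if x >= aspiration:
        return 1.0
    return (x - reservation) / (aspiration - reservation)

def additive_utility(xs, levels, weights):
    """Weighted sum of partial utilities."""
    return sum(w * partial_utility(x, r, a)
               for x, (r, a), w in zip(xs, levels, weights))

def multiplicative_utility(xs, levels, weights):
    """Weighted product of partial utilities (strictly convex indifference curves)."""
    u = 1.0
    for x, (r, a), w in zip(xs, levels, weights):
        u *= partial_utility(x, r, a) ** w
    return u

# Two criteria, e.g. periodic income and end-inventory value of a holding.
levels = [(50_000, 120_000), (200_000, 400_000)]  # (reservation, aspiration)
weights = [0.6, 0.4]
xs = [90_000, 260_000]
print(additive_utility(xs, levels, weights))        # ~0.46
print(multiplicative_utility(xs, levels, weights))  # ~0.44
```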
Abstract:
It is well known that in the time-domain acquisition of NMR data, signal-to-noise (S/N) improves as the square root of the number of transients accumulated. However, the amplitude of the measured signal varies during the time of detection, having a functional form dependent on the coherence detected. Matching the time spent signal averaging to the expected amplitude of the observed signal should therefore also improve the detected signal-to-noise. Following this reasoning, Barna et al. (J. Magn. Reson. 75, 384, 1987) demonstrated the utility of exponential sampling in one- and two-dimensional NMR, using maximum-entropy methods to analyze the data. It is proposed here that for two-dimensional experiments the exponential sampling be replaced by exponential averaging. The data thus collected can be analyzed by standard fast-Fourier-transform routines. We demonstrate the utility of exponential averaging in 2D NOESY spectra of the protein ubiquitin, in which an enhanced S/N is observed. It is also shown that the method allows acquisition of delayed double-quantum-filtered COSY spectra without phase distortion.
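A minimal sketch of the exponential-averaging idea follows: every t1 increment is kept (so a standard FFT still applies), but more transients are averaged at early increments where the signal envelope is large. The proportionality rule, phase-cycle rounding, and parameters are illustrative assumptions, not the paper's exact schedule.

```python
# Sketch of "exponential averaging": the number of transients averaged
# per t1 increment is matched to the expected exponential decay of the
# signal. All parameters below are illustrative.
import math

def transients_per_increment(n_increments, dwell, t2, n_min=2, n_max=64):
    """Allocate more transients where the signal envelope exp(-t1/T2) is large."""
    schedule = []
    for k in range(n_increments):
        envelope = math.exp(-k * dwell / t2)
        n = max(n_min, round(n_max * envelope))
        # bump odd counts to even so that simple phase cycles complete
        schedule.append(n + (n % 2))
    return schedule

sched = transients_per_increment(n_increments=8, dwell=0.5e-3, t2=2e-3)
print(sched)  # decays from n_max toward n_min, e.g. [64, 50, 40, ...]
print(sum(sched), "total transients vs", 8 * 64, "for uniform averaging")
```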
Abstract:
Novel one- and two-dimensional NMR techniques are proposed and utilized to determine the signs of the order parameters used in the study of the mobility of fatty acid chains. The experiments designed to extract this information use the intensities of the sidebands in the spectra of oriented systems spinning at the magic angle. Advantages of the two-dimensional technique over the one-dimensional method are discussed. The utility of the method in studying the dynamic properties of membranes and model systems is pointed out.
Abstract:
This dissertation investigates the atomic power solution in Finland between 1955 and 1970. During these years a national arrangement for atomic energy technology evolved. The foundations of Finnish atomic energy policy, namely the creation of basic legislation and the first governmental bodies, were laid between 1955 and 1965. In the late 1960s, the necessary technological and political decisions were made in order to purchase the first commercial nuclear reactor. A historical narration of this process is set in the international context of "atoms for peace" policies and Cold War history in general. The geopolitical position of Finland made it necessary to become involved in balanced participation in international scientific-technical exchange and assistive nuclear programs. The Paris Peace Treaty of 1947 categorically denied Finland the acquisition of nuclear weapons. Accordingly, from the "Geneva year" of 1955, the emphasis in Finland was placed on peaceful uses of atomic energy as well as on the education of national professionals. The initiative for a governmental atomic energy commission came from academia, but the ultimate motive behind it was an anticipated structural change in the national energy supply. Economically exploitable hydropower resources were expected to be fully harnessed within ten years, and atomic power was seen as a promising and complementary new energy technology. While importing fuels like coal was out of the question because of scarce foreign currency, domestic uranium mineral deposits were considered a potential source of nuclear fuel. Nevertheless, even then nuclear energy was regarded as just one of the possible future energy options. In the mid-1960s a bandwagon effect of light water reactor orders was witnessed in the United States and soon elsewhere in the world. In Finland, two separate invitations for bids for nuclear reactors were issued. This study explores at length both their preceding grounds and later phases. An explanation is given that the parallel, independent, and nearly identical tenders reflected a post-war ideological rivalry between the state-owned utility Imatran Voima and private energy utilities. A private-sector nuclear power association, Voimayhdistys Ydin, represented energy-intensive paper and pulp industries and wanted free choice instead of being associated with "the state monopoly" in energy pricing. As a background to this, a decisive change had started to happen within Finnish energy policy: private and municipal big thermal power plants became incorporated into the national hydropower production system. A characteristic phenomenon in the later history is the Soviet Union's effort to bid for the tender of Imatran Voima. A nuclear superpower was willing to take part in the competition, but not on a turnkey basis as Imatran Voima had presumed. As a result of many political turns and four years of negotiations, the first Finnish commercial light water reactor was ordered from the East. Soon after this, the private nuclear power group ordered its reactors from Sweden. This work interprets this as a reasonable geopolitical balance in choosing politically sensitive technology. Conceptually, the social and political dimensions of new technology are emphasised.
Negotiations on the Finnish atomic energy program are viewed as both cooperation and struggle, in which state-oriented and private-oriented regimes pose their own macro-level views and goals (technopolitical imaginaries) and defend and advance their plans and practical modes of action (schemata). Here, not only technologists but also political actors are seen to contribute to technopolitical realisations.
Abstract:
Bioconversion of acyclic isoprenoids using a strain of Aspergillus niger results in hydroxylated metabolites with regio- and stereoselectivity. The organism carries out oxidation of the terminal allylic methyl group and of the remote double bond in all the compounds tested (I-VII). However, these two activities seem to have preferential structural requirements. When an acyclic isoprenoid with a ketone functionality, such as geranylacetone, is used as the substrate, the organism also carries out asymmetric reduction of the keto group. All the metabolites formed have been purified and characterized by conventional spectroscopic methods, and quantification has been performed by gas chromatographic analysis.
Abstract:
Cross-polarization from the dipolar reservoir for a range of mismatched Hartmann-Hahn conditions has been considered. Experiment, in general, agrees with the dispersive Lorentzian behavior expected on the basis of quasi-equilibrium theory. It is observed that inclusion of additional mechanisms of polarization transfer leads to an improved fit to the experimental results. The utility of extending the technique to the case of ordered long-chain molecules, such as liquid crystals, for the measurement of the local dipolar field is also presented.
Abstract:
The use of an instrumented impact test set-up to evaluate the influence of water ingress on the impact response of a carbon–epoxy (C–E) laminated composite system containing discontinuous buffer strips (BS) has been examined. Data on the BS-free C–E sample in dry conditions are used as a reference against which to compare the data derived from samples immersed in water. The work demonstrates the utility of an instrumented impact test set-up in characterising the response, first owing to the architectural difference introduced by the buffer strips and then owing to the presence of an additional phase in the form of water ingressed into the sample. The presence of water was found to enhance the energy absorption characteristics of the C–E system with BS insertions. It was also noticed that with an increasing number of BS layer insertions, the load–time plots displayed characteristic changes. The ductility indices (DI) were found to be lower for the water-immersed samples than for the dry ones.
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type. The agents report only the type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as rebates to agents. Two performance criteria, evaluated within the class of linear rebate functions, are of interest: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize both. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems, where the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems by sampling constraints. The relaxed problem is a linear programming problem (LP). We then identify the number of samples needed for "near-feasibility" of the relaxed constraint set. Under some conditions on the valuation function, we show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper reduce to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed. Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations, and where the resource planner is not interested in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear programming problem, for which several efficient solvers exist. This relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
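As a hedged sketch of the constraint-sampling idea, the code below treats the indivisible-good special case mentioned above, with linear rebates in the spirit of Moulin and of Guo and Conitzer: each agent's rebate is a linear function of the sorted bids of the others (preserving strategy-proofness), no-deficit and worst-case-ratio constraints are enforced only on sampled type profiles, and the resulting LP is solved with an off-the-shelf solver. The uniform sampling distribution, problem sizes, and normalization are illustrative assumptions, not the paper's exact formulation.

```python
# Constraint-sampled LP for linear rebates in a single-item VCG auction.
# Illustrative assumptions throughout; requires numpy and scipy.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_agents, n_samples = 5, 2000

# Variables: rebate coefficients c[0..n-2] and the worst-case ratio t.
# For each sampled profile v (sorted descending):
#   revenue  = v[1]                      (second-highest bid, VCG payment)
#   rebate_i = sum_j c[j] * (j-th highest bid among agents != i)
#   surplus  = revenue - sum_i rebate_i, constrained to 0 <= surplus <= t * v[0]
A_ub, b_ub = [], []
for _ in range(n_samples):
    v = np.sort(rng.uniform(0, 1, n_agents))[::-1]
    revenue, efficient = v[1], v[0]
    coef = np.zeros(n_agents - 1)        # coefficient of c[j] in sum_i rebate_i
    for i in range(n_agents):
        coef += np.delete(v, i)          # others' bids, still sorted descending
    # surplus >= 0  <=>  coef . c <= revenue
    A_ub.append(np.append(coef, 0.0)); b_ub.append(revenue)
    # surplus <= t * efficient  <=>  -coef . c - efficient * t <= -revenue
    A_ub.append(np.append(-coef, -efficient)); b_ub.append(-revenue)

cost = np.zeros(n_agents)                # n-1 rebate coefficients plus t
cost[-1] = 1.0                           # minimize the worst-case ratio t
res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * (n_agents - 1) + [(0, None)])
print("worst-case surplus ratio on samples:", res.x[-1])
print("rebate coefficients:", res.x[:-1])
```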