915 results for simultaneous inference
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Introduction. Volvulus of the transverse colon is rare compared to cecal and sigmoid volvulus. Cases involving simultaneous volvulus of the transverse colon and another colonic segment are extremely rare. Case report. We report a rare case of simultaneous sigmoid and transverse colon volvulus in an 82-year-old Caucasian female. Conclusion. Volvulus is a well-recognized cause of large bowel obstruction. The development of transverse and sigmoid volvulus in the same patient is extremely rare. Though rare, this possibility must always be considered in the differential diagnosis when dealing with recurrent intermittent abdominal pain or acute intestinal obstruction.
Abstract:
We report the case of a 24-year-old woman who underwent cesarean delivery at 39 weeks and presented in the puerperium with worsening abdominal pain and septicaemia. Preoperative ultrasonography suggested the presence of a pelvic collection. Exploratory laparotomy revealed the simultaneous presence of Meckel's diverticulitis and appendicitis without bowel perforation. The patient made an uneventful recovery following small bowel resection with end-to-end anastomosis and appendicectomy.
Abstract:
Causal inference with a continuous treatment is a relatively under-explored problem. In this dissertation, we adopt the potential outcomes framework. Potential outcomes are responses that would be seen for a unit under all possible treatments. In an observational study where the treatment is continuous, the potential outcomes are an uncountably infinite set indexed by treatment dose. We parameterize this unobservable set as a linear combination of a finite number of basis functions whose coefficients vary across units. This leads to new techniques for estimating the population average dose-response function (ADRF). Some techniques require a model for the treatment assignment given covariates, some require a model for predicting the potential outcomes from covariates, and some require both. We develop these techniques using a framework of estimating functions, compare them to existing methods for continuous treatments, and simulate their performance in a population where the ADRF is linear and the models for the treatment and/or outcomes may be misspecified. We also extend the comparisons to a data set of lottery winners in Massachusetts. Next, we describe the methods and functions in the R package causaldrf using data from the National Medical Expenditure Survey (NMES) and Infant Health and Development Program (IHDP) as examples. Additionally, we analyze the National Growth and Health Study (NGHS) data set and deal with the issue of missing data. Lastly, we discuss future research goals and possible extensions.
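The outcome-modeling route mentioned above can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in, not the dissertation's estimators: a single confounder, a linear outcome regression, and a simulated population. The idea is to fit E[Y | T, X] and average the fitted curve over the covariate distribution to estimate the ADRF.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical data-generating process: one confounder X affects both
# the continuous treatment T and the outcome Y.
X = rng.normal(size=n)
T = 0.5 * X + rng.normal(size=n)                   # continuous treatment dose
Y = 1.0 + 2.0 * T + 1.5 * X + rng.normal(size=n)   # true ADRF: E[Y(t)] = 1 + 2t

# Outcome-regression estimate of the ADRF: fit E[Y | T, X] by least
# squares, then average the fitted curve over the empirical X distribution.
design = np.column_stack([np.ones(n), T, X])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)

def adrf_hat(t):
    """Estimated average dose-response function at dose t."""
    return beta[0] + beta[1] * t + beta[2] * X.mean()
```

With this simulated population, `adrf_hat(1.0) - adrf_hat(0.0)` recovers a slope close to the true value of 2; the dissertation's treatment-model and doubly robust estimators would replace or augment the regression step.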
Abstract:
Introduction: Membranous glomerulonephritis is commonly described in systemic lupus erythematosus (SLE) and hypothyroidism. Clinical presentation: We report the case of a 40-year-old woman who presented with membranous glomerulonephritis associated with SLE, rheumatoid arthritis and hypothyroidism due to Hashimoto's thyroiditis. Conclusions: The simultaneous occurrence of these three diseases as possible causes of membranous glomerulonephritis is exceptionally rare.
Abstract:
In physics, one attempts to infer the rules governing a system given only the results of imperfect measurements. Hence, microscopic theories may be effectively indistinguishable experimentally. We develop an operationally motivated procedure to identify the corresponding equivalence classes of states, and argue that the renormalization group (RG) arises from the inherent ambiguities associated with the classes: one encounters flow parameters as, e.g., a regulator, a scale, or a measure of precision, which specify representatives in a given equivalence class. This provides a unifying framework and reveals the role played by information in renormalization. We validate this idea by showing that it justifies the use of low-momenta n-point functions as statistically relevant observables around a Gaussian hypothesis. These results enable the calculation of distinguishability in quantum field theory. Our methods also provide a way to extend renormalization techniques to effective models which are not based on the usual quantum-field formalism, and elucidate the relationships between various types of RG.
Abstract:
A radar scatterometer operates by transmitting a pulse of microwave energy toward the ocean's surface and measuring the normalized (per-unit-surface) radar backscatter coefficient (σ°). The primary application of scatterometry is the measurement of near-surface ocean winds. By combining σ° measurements from different azimuth angles, the 10 m vector wind can be determined through a Geophysical Model Function (GMF), which relates wind and backscatter. This paper proposes a mission concept for the measurement of both oceanic winds and surface currents, which makes full use of earlier C-band radar remote sensing experience. For the determination of ocean currents, in particular, the novel idea of using two chirps of opposite slope is introduced. The fundamental processing steps required to retrieve surface currents are given together with their associated accuracies. A detailed description of the mission proposal and comparisons between real and retrieved surface currents are presented. The proposed ocean Doppler scatterometer can be used to generate global surface ocean current maps with accuracies better than 0.2 m/s at a spatial resolution better than 25 km (i.e., 12.5 km spatial sampling) on a daily basis. These maps will provide insight into upper-ocean mesoscale dynamics. The work lies at a frontier: the present inability to measure ocean currents from space in a consistent and synoptic manner is one of the greatest weaknesses in ocean remote sensing.
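As a back-of-the-envelope illustration of the Doppler measurement principle behind such a mission (the carrier frequency below is a typical C-band value, not the proposal's exact figure), the standard monostatic Doppler relation links radial surface velocity to the measured frequency shift:

```python
# Minimal sketch (hypothetical values) of the monostatic Doppler relation
# used to convert a measured Doppler shift into a line-of-sight velocity:
# v_r = wavelength * f_D / 2.
c = 299_792_458.0           # speed of light, m/s
f_carrier = 5.3e9           # a typical C-band carrier frequency, Hz
wavelength = c / f_carrier  # ~0.057 m

def radial_velocity(doppler_hz):
    """Line-of-sight surface velocity implied by a Doppler shift (m/s)."""
    return wavelength * doppler_hz / 2.0
```

Under these assumed numbers, the 0.2 m/s accuracy target quoted above corresponds to resolving a Doppler shift of roughly 7 Hz, which gives a feel for the precision the two-chirp processing must deliver.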
Abstract:
We report here the case of a young patient with a simultaneous isolated septal myocardial infarction (MI) and pulmonary embolism (PE). The aim was to describe a rare clinical entity and to explain why these two pathologies were present at the same time in a young patient. A review of the literature was performed. An interventional cardiologist, an interventional radiologist and a lung specialist were consulted. The diagnostic workup revealed only a heterozygous Factor V Leiden mutation. This presentation was probably fortuitous but, in our opinion, worth reporting.
Abstract:
Lignocellulosic biomass is the most abundant renewable source of energy that has been widely explored as second-generation biofuel feedstock. Despite more than four decades of research, the process of ethanol production from lignocellulosic (LC) biomass remains economically unfeasible. This is due to the high cost of enzymes, end-product inhibition of enzymes, and the need for cost-intensive inputs associated with a separate hydrolysis and fermentation (SHF) process. Thermotolerant yeast strains that can undergo fermentation at temperatures above 40°C are suitable alternatives for developing the simultaneous saccharification and fermentation (SSF) process to overcome the limitations of SHF. This review describes the various approaches to screen and develop thermotolerant yeasts via genetic and metabolic engineering. The advantages and limitations of SSF at high temperatures are also discussed. A critical insight into the effect of high temperatures on yeast morphology and physiology is also included. This can improve our understanding of the development of thermotolerant yeast amenable to the SSF process to make LC ethanol production commercially viable.
Abstract:
Statistical methodology is proposed for comparing molecular shapes. In order to account for the continuous nature of molecules, classical shape analysis methods are combined with techniques used for predicting random fields in spatial statistics. Applying a modification of Procrustes analysis, Bayesian inference is carried out using Markov chain Monte Carlo methods for the pairwise alignment of the resulting molecular fields. Superimposing entire fields rather than the configuration matrices of nuclear positions thereby solves the problem that there is usually no clear one-to-one correspondence between the atoms of the two molecules under consideration. Using a similar concept, we also propose an adaptation of the generalised Procrustes analysis algorithm for the simultaneous alignment of multiple molecular fields. The methodology is applied to a dataset of 31 steroid molecules.
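The rigid-body core of Procrustes alignment can be sketched as follows. This is a minimal SVD-based implementation for point configurations only; the paper's actual contributions, superimposing continuous molecular fields and the Bayesian MCMC machinery, are not shown.

```python
import numpy as np

def procrustes_align(A, B):
    """Ordinary Procrustes: rigidly rotate and translate B onto A
    (rows are points). Returns the aligned copy of B."""
    muA, muB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - muA, B - muB
    U, _, Vt = np.linalg.svd(B0.T @ A0)
    R = U @ Vt
    # Guard against reflections: force a proper rotation (det R = +1).
    if np.linalg.det(R) < 0:
        U[:, -1] *= -1
        R = U @ Vt
    return B0 @ R + muA

# Usage: recover a configuration from a rotated, translated copy.
rng = np.random.default_rng(1)
A = rng.normal(size=(10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
B = A @ Rz + np.array([1.0, -2.0, 0.5])
print(np.allclose(procrustes_align(A, B), A))  # → True
```

Aligning whole fields, as the paper proposes, sidesteps the need for the row-wise point correspondence that this classical version assumes.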
Abstract:
Phylogenetic inference consists in searching for an evolutionary tree that best explains the genealogical relationships among a set of species. Phylogenetic analysis has a large number of applications in areas such as biology, ecology and paleontology. Several criteria have been defined for inferring phylogenies, among them maximum parsimony and maximum likelihood. The first seeks the phylogenetic tree that minimizes the number of evolutionary steps needed to describe the evolutionary history among species, while the second seeks the tree with the highest probability of producing the observed data according to an evolutionary model. The search for a phylogenetic tree can be formulated as a multi-objective optimization problem that aims to find trees satisfying both the parsimony and likelihood criteria simultaneously, as far as possible. Because these criteria differ, there will not be a single optimal solution (a single tree) but a set of compromise solutions, called the Pareto-optimal set. Evolutionary algorithms are successfully used today to find these solutions. These algorithms are a family of inexact techniques inspired by the process of natural selection, and they usually find high-quality solutions to difficult optimization problems. They work by manipulating a set of trial solutions (trees, in the phylogeny case) with operators: some exchange information between solutions, simulating DNA crossover, while others apply random modifications, simulating mutation. The result is an approximation to the Pareto-optimal set, which can be displayed graphically so that the domain expert (the biologist, in the case of phylogenetic inference) can choose the compromise solution of greatest interest.
In the case of multi-objective optimization applied to phylogenetic inference, there is an open-source software tool, called MO-Phylogenetics, designed to solve inference problems with both classic and state-of-the-art evolutionary algorithms. REFERENCES [1] C.A. Coello Coello, G.B. Lamont, D.A. van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Springer. August 2007. [2] C. Zambrano-Vega, A.J. Nebro, J.F. Aldana-Montes. MO-Phylogenetics: a phylogenetic inference software tool with multi-objective evolutionary metaheuristics. Methods in Ecology and Evolution. In press. February 2016.
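The notion of Pareto optimality described above can be made concrete with a short sketch (the tree labels and parsimony/likelihood scores are invented for illustration): for two objectives to be minimized, a solution is Pareto-optimal when no other solution is at least as good in both objectives and strictly better in one.

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the non-dominated (label, objectives) pairs: the Pareto-optimal set."""
    return [s for s in solutions
            if not any(dominates(t[1], s[1]) for t in solutions)]

# Hypothetical trees scored by (parsimony steps, negative log-likelihood),
# both to be minimized.
trees = [("T1", (120, 345.0)), ("T2", (118, 350.0)),
         ("T3", (125, 360.0)), ("T4", (119, 344.0))]
front = pareto_front(trees)
print(sorted(name for name, _ in front))  # → ['T2', 'T4']
```

T1 is dominated by T4 (worse in both objectives) and T3 by T1, so only the two mutually incomparable trees survive; an evolutionary metaheuristic such as those in MO-Phylogenetics searches tree space for exactly such a front.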
Abstract:
Organismal development, homeostasis, and pathology are rooted in inherently probabilistic events. From gene expression to cellular differentiation, rates and likelihoods shape the form and function of biology. Processes ranging from growth to cancer homeostasis to reprogramming of stem cells all require transitions between distinct phenotypic states, and these occur at defined rates. Therefore, measuring the fidelity and dynamics with which such transitions occur is central to understanding natural biological phenomena and is critical for therapeutic interventions.
While these processes may produce robust population-level behaviors, decisions are made by individual cells. In certain circumstances, these minuscule computing units effectively roll dice to determine their fate. And while the 'omics' era has provided vast amounts of data on what these populations are doing en masse, the behaviors of the underlying units of these processes get washed out in averages.
Therefore, in order to understand the behavior of a sample of cells, it is critical to reveal how its underlying components, or mixture of cells in distinct states, each contribute to the overall phenotype. As such, we must first define what states exist in the population, determine what controls the stability of these states, and measure in high dimensionality the dynamics with which these cells transition between states.
To address a specific example of this general problem, we investigate the heterogeneity and dynamics of mouse embryonic stem cells (mESCs). While a number of reports have identified particular genes in ES cells that switch between 'high' and 'low' metastable expression states in culture, it remains unclear how levels of many of these regulators combine to form states in transcriptional space. Using a method called single molecule mRNA fluorescent in situ hybridization (smFISH), we quantitatively measure and fit distributions of core pluripotency regulators in single cells, identifying a wide range of variabilities between genes, but each explained by a simple model of bursty transcription. From this data, we also observed that strongly bimodal genes appear to be co-expressed, effectively limiting the occupancy of transcriptional space to two primary states across genes studied here. However, these states also appear punctuated by the conditional expression of the most highly variable genes, potentially defining smaller substates of pluripotency.
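A common minimal model behind such fits, sketched here with simulated counts rather than the paper's smFISH data, approximates the steady state of bursty transcription by a negative binomial distribution of mRNA copy numbers, whose burst size and burst frequency can be recovered by the method of moments:

```python
import numpy as np

# Minimal sketch (simulated counts, hypothetical parameters): under a
# simple bursty-transcription model, the steady-state copy-number
# distribution is approximately negative binomial, with
#   burst size     b = var/mean - 1
#   burst frequency r = mean / b
rng = np.random.default_rng(2)
r_true, b_true = 4.0, 6.0            # assumed burst frequency and size
p = 1.0 / (1.0 + b_true)
counts = rng.negative_binomial(r_true, p, size=200_000)

mean, var = counts.mean(), counts.var()
b_hat = var / mean - 1.0             # estimated burst size
r_hat = mean / b_hat                 # estimated burst frequency
```

With enough cells the moment estimates land close to the assumed values (b ≈ 6, r ≈ 4); genes with larger fitted burst sizes are the more variable, bimodal-prone ones.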
Having defined the transcriptional states, we next asked what might control their stability or persistence. Surprisingly, we found that DNA methylation, a mark normally associated with irreversible developmental progression, was itself differentially regulated between these two primary states. Furthermore, both acute and chronic inhibition of DNA methyltransferase activity led to reduced heterogeneity among the population, suggesting that metastability can be modulated by this strong epigenetic mark.
Finally, because understanding the dynamics of state transitions is fundamental to a variety of biological problems, we sought to develop a high-throughput method for identifying cellular trajectories without the need for cell-line engineering. We achieved this by combining cell-lineage information gathered from time-lapse microscopy with endpoint smFISH measurements of final expression states. Applying a simple mathematical framework to these lineage-tree-associated expression states enables the inference of dynamic transitions. We apply this approach to infer temporal sequences of events, quantitative switching rates, and network topology among a set of ESC states.
Taken together, we identify distinct expression states in ES cells, gain fundamental insight into how a strong epigenetic modifier enforces the stability of these states, and develop and apply a new method for the identification of cellular trajectories using scalable in situ readouts of cellular state.
Abstract:
The nutritional contribution of the dietary nitrogen, carbon and total dry matter supplied by fish meal (FM), soy protein isolate (SP) and corn gluten (CG) to the growth of Pacific white shrimp Litopenaeus vannamei was assessed by means of isotopic analyses. As SP and CG are derived from plants with different photosynthetic pathways, which imprint specific carbon isotope values on plant tissues, their isotopic values were contrasting. FM is isotopically distinct from these plant meals with regard to both carbon and nitrogen. Such natural isotopic differences were used to design experimental diets with contrasting isotopic signatures. Seven isoproteic (36% crude protein), isoenergetic (4.7 kcal g−1) diets were formulated; three were isotopic controls manufactured with only one main ingredient supplying dietary nitrogen and carbon: 100% FM (diet 100F), 100% SP (diet 100S) and 100% CG (diet 100G). Four more diets were formulated with varying mixtures of these three ingredients: one included 33% of each ingredient on a dietary nitrogen basis (diet 33FSG) and the other three included a 50:25:25 proportion of the three ingredients (diets 50FSG, 50SGF and 50GFS). At the end of the bioassay there were no significant differences in growth rate between shrimps fed the four mixed diets and diet 100F (k=0.215–0.224). Growth rates were significantly lower (k=0.163–0.201) in shrimps grown on diets containing only plant meals. Carbon and nitrogen stable isotope values (δ13C and δ15N) were measured in experimental diets and shrimp muscle tissue, and the results were incorporated into a three-source, two-isotope mixing model. The relative contributions of dietary nitrogen, carbon and total dry matter from FM, SP and CG to growth were statistically similar to the proportions established in most of the diets after correcting for the apparent digestibility coefficients of the ingredients.
Dietary nitrogen available in diet 33FSG was incorporated into muscle tissue at proportions representing 24, 35 and 41% of the respective ingredients. Diet 50GSF contributed significantly higher amounts of dietary nitrogen from CG than from FM. When the level of dietary nitrogen derived from FM was increased in diet 50FSG, nutrient contributions were more comparable to the available dietary proportions, with incorporations of 44, 29 and 27% from FM, SP and CG, respectively. Nutritional contributions from SP were closely consistent with the dietary proportions established in the experimental diets.
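The three-source, two-isotope mixing model referred to above reduces to a small linear system: two isotope mass balances plus the constraint that the source fractions sum to one determine the three contributions. The δ13C and δ15N values below are made-up illustrations, not the study's measurements.

```python
import numpy as np

# Minimal sketch of a three-source, two-isotope mass-balance mixing model.
# Unknowns: fractional contributions f_FM, f_SP, f_CG solving
#   d13C_mix = sum_i f_i * d13C_i
#   d15N_mix = sum_i f_i * d15N_i
#   1        = sum_i f_i
sources = {            # hypothetical (d13C, d15N) per source
    "FM": (-18.0, 14.0),
    "SP": (-26.0,  1.0),
    "CG": (-12.0,  4.0),
}
names = list(sources)
A = np.array([[sources[n][0] for n in names],
              [sources[n][1] for n in names],
              [1.0, 1.0, 1.0]])

# Sanity check: a mixture built from known fractions 0.50/0.25/0.25
# should be recovered exactly by solving the system.
f_true = np.array([0.50, 0.25, 0.25])
mix = A @ f_true
f_hat = np.linalg.solve(A, mix)
print({n: round(float(f), 2) for n, f in zip(names, f_hat)})
```

In practice the measured tissue signature replaces `mix`, and digestibility and trophic-shift corrections (as applied in the study) adjust the source values before solving.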