938 results for Finite-dimensional discrete phase spaces
Resumo:
Contraction, strike slip, and extension displacements along the Hikurangi margin northeast of the North Island of New Zealand coincide with large lateral gradients in material properties. We use a finite-difference code utilizing elastic and elastic-plastic rheologies to build large-scale, three-dimensional numerical models which investigate the influence of material properties on velocity partitioning within oblique subduction zones. Rheological variation in the oblique models is constrained by seismic velocity and attenuation information available for the Hikurangi margin. We compare the effect of weakly versus strongly coupled subduction interfaces on the development of extension and the partitioning of velocity components for orthogonal and oblique convergence and include the effect of ponded sediments beneath the Raukumara Peninsula. Extension and velocity partitioning occur if the subduction interface is weak, but neither develops if the subduction interface is strong. The simple mechanical model incorporating rheological variation based on seismic observations produces kinematics that closely match those published from the Hikurangi margin. These include extension within the Taupo Volcanic Zone, uplift over ponded sediments, and dextral contraction to the south.
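As an illustration of the two rheologies contrasted above, the following sketch implements a one-dimensional elastic versus elastic-plastic stress update with perfect plasticity; the moduli, yield stresses and strain increments are illustrative assumptions, not values from the Hikurangi models or the authors' finite-difference code.

```python
# Minimal sketch (not the authors' code): elastic vs. elastic-plastic
# stress update, illustrating the two rheologies used in the models.
# All parameter values are illustrative assumptions.
import numpy as np

def stress_update(strain_increments, shear_modulus=30e9, yield_stress=100e6,
                  plastic=True):
    """Return the stress history for a series of shear-strain increments.

    Elastic: stress grows linearly with strain.
    Elastic-plastic: stress is capped at the yield stress (perfect plasticity),
    a crude stand-in for a weak fault or subduction interface.
    """
    stress, history = 0.0, []
    for d_eps in strain_increments:
        stress += 2.0 * shear_modulus * d_eps         # elastic predictor
        if plastic and abs(stress) > yield_stress:    # plastic corrector
            stress = np.sign(stress) * yield_stress   # return to the yield surface
        history.append(stress)
    return np.array(history)

increments = np.full(50, 1e-4)                        # steady shearing
weak = stress_update(increments, yield_stress=20e6)   # "weak" interface
strong = stress_update(increments, plastic=False)     # purely elastic
print(weak[-1] / 1e6, "MPa vs", strong[-1] / 1e6, "MPa")
```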
Resumo:
A quantum simulator of U(1) lattice gauge theories can be implemented with superconducting circuits. This allows the investigation of confined and deconfined phases in quantum link models, and of valence bond solid and spin liquid phases in quantum dimer models. Fractionalized confining strings and the real-time dynamics of quantum phase transitions are accessible as well. Here we show how state-of-the-art superconducting technology allows us to simulate these phenomena in relatively small circuit lattices. By exploiting the strong non-linear couplings between quantized excitations emerging when superconducting qubits are coupled, we show how to engineer gauge invariant Hamiltonians, including ring-exchange and four-body Ising interactions. We demonstrate that, despite decoherence and disorder effects, minimal circuit instances allow us to investigate properties such as the dynamics of electric flux strings, signaling confinement in gauge invariant field theories. The experimental realization of these models in larger superconducting circuits could address open questions beyond current computational capability.
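A minimal sketch of the ring-exchange building block mentioned above, assuming a spin-1/2 quantum link representation on a single square plaquette; this is a toy numpy diagonalization, not the superconducting-circuit Hamiltonian engineering proposed in the paper.

```python
# Minimal sketch (an assumption, not the paper's circuit Hamiltonian): the
# ring-exchange term of a U(1) quantum link model on a single plaquette,
# with each link represented by a spin-1/2 raising/lowering operator.
import numpy as np

sp = np.array([[0, 1], [0, 0]], dtype=complex)   # S^+ on one link
sm = sp.conj().T                                  # S^-
id2 = np.eye(2, dtype=complex)

def on_link(op, link, n_links=4):
    """Embed a single-link operator into the 4-link plaquette Hilbert space."""
    ops = [op if i == link else id2 for i in range(n_links)]
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

# U_plaquette = S1^+ S2^- S3^+ S4^- flips an oriented loop of electric flux.
U = on_link(sp, 0) @ on_link(sm, 1) @ on_link(sp, 2) @ on_link(sm, 3)
J = 1.0
H_ring = -J * (U + U.conj().T)                    # gauge-invariant ring exchange

evals = np.linalg.eigvalsh(H_ring)
print("lowest plaquette energies:", np.round(evals[:4], 3))
```

Only the two "flippable" flux configurations are coupled by the ring-exchange term, so the spectrum consists of a single pair of energies at plus and minus J on top of a large zero-energy sector.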
Resumo:
We argue that the effective theory describing the long-wavelength dynamics of black branes is the same effective theory that describes the dynamics of biophysical membranes. We improve the phase structure of higher-dimensional black rings by considering finite thickness corrections in this effective theory, showing a striking agreement between our analytical results and recent numerical constructions while simultaneously drawing a parallel between gravity and the effective theory of biophysical membranes.
Resumo:
Intestinal dendritic cells (DCs) are believed to sample and present commensal bacteria to the gut-associated immune system to maintain immune homeostasis. How antigen sampling pathways handle intestinal pathogens remains elusive. We present a murine colitogenic Salmonella infection model that is highly dependent on DCs. Conditional DC depletion experiments revealed that intestinal virulence of the S. Typhimurium SL1344 ΔinvG mutant lacking a functional type 3 secretion system-1 (ΔinvG) critically required DCs for invasion across the epithelium. The DC dependency was limited to the early phase of infection, when bacteria colocalized with CD11c(+)CX3CR1(+) mucosal DCs. At later stages, the bacteria became associated with other (CD11c(-)CX3CR1(-)) lamina propria cells, DC depletion no longer attenuated the pathology, and a MyD88-dependent mucosal inflammation was initiated. Using bone marrow chimeric mice, we showed that MyD88 signaling within hematopoietic cells, which are distinct from DCs, was required and sufficient for induction of the colitis. Moreover, MyD88-deficient DCs supported transepithelial uptake of the bacteria and the induction of MyD88-dependent colitis. These results establish that pathogen sampling by DCs is a discrete, and MyD88-independent, step during the initiation of a mucosal innate immune response to bacterial infection in vivo.
Resumo:
BACKGROUND AIMS The diverse phenotypic changes and clinical and economic disadvantages associated with the monolayer expansion of bone marrow-derived mesenchymal stromal cells (MSCs) have focused attention on the development of one-step intraoperative cell therapies and homing strategies. The mononuclear cell fraction of bone marrow, inclusive of discrete stem cell populations, is not well characterized, and we currently lack suitable cell culture systems in which to culture and investigate the behavior of these cells. METHODS Human bone marrow-derived mononuclear cells were cultured within fibrin for 2 weeks with or without fibroblast growth factor-2 supplementation. DNA content and cell viability of enzymatically retrieved cells were determined at days 7 and 14. Cell surface marker profiling and cell cycle analysis were performed by means of multi-color flow cytometry and a 5-ethynyl-2'-deoxyuridine incorporation assay, respectively. RESULTS The total mononuclear cell fraction, isolated from whole human bone marrow, was successfully cultured in fibrin gels for up to 14 days under static conditions. Discrete niche cell populations, including MSCs, pericytes and hematopoietic stem cells, were maintained in relative quiescence for 7 days in proportions similar to those in freshly isolated cells. Colony-forming unit efficiency of enzymatically retrieved MSCs was significantly higher at day 14 compared with day 0 and, in accordance with previously published works, was fibroblast growth factor-2-dependent. CONCLUSIONS Fibrin gels provide a simple, novel system in which to culture and study the complete fraction of bone marrow-derived mononuclear cells and may support the development of improved bone marrow cell-based therapies.
Resumo:
Purpose To investigate whether nonhemodynamic resonant saturation effects can be detected in patients with focal epilepsy by using a phase-cycled stimulus-induced rotary saturation (PC-SIRS) approach with spin-lock (SL) preparation and whether they colocalize with the seizure onset zone and surface interictal epileptiform discharges (IED). Materials and Methods The study was approved by the local ethics committee, and all subjects gave written informed consent. Eight patients with focal epilepsy undergoing presurgical surface and intracranial electroencephalography (EEG) underwent magnetic resonance (MR) imaging at 3 T with a whole-brain PC-SIRS imaging sequence with alternating SL-on and SL-off and two-dimensional echo-planar readout. The power of the SL radiofrequency pulse was set to 120 Hz to sensitize the sequence to high gamma oscillations present in epileptogenic tissue. Phase cycling was applied to capture distributed current orientations. Voxel-wise subtraction of SL-off from SL-on images enabled the separation of T2* effects from rotary saturation effects. The topography of PC-SIRS effects was compared with the seizure onset zone at intracranial EEG and with surface IED-related potentials. Bayesian statistics were used to test whether prior PC-SIRS information could improve IED source reconstruction. Results Nonhemodynamic resonant saturation effects ipsilateral to the seizure onset zone were detected in six of eight patients (concordance rate, 0.75; 95% confidence interval: 0.40, 0.94) by means of the PC-SIRS technique. They were concordant with IED surface negativity in seven of eight patients (0.88; 95% confidence interval: 0.51, 1.00). Including PC-SIRS as prior information improved the evidence of the standard EEG source models compared with the use of uninformed reconstructions (exceedance probability, 0.77 vs 0.12; Wilcoxon test of model evidence, P < .05). Nonhemodynamic resonant saturation effects resolved in patients with favorable postsurgical outcomes, but persisted in patients with postsurgical seizure recurrence. Conclusion Nonhemodynamic resonant saturation effects are detectable during interictal periods with the PC-SIRS approach in patients with epilepsy. The method may be useful for MR imaging-based detection of neuronal currents in a clinical environment.
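The voxel-wise subtraction of SL-off from SL-on volumes described above can be sketched as follows; the array shapes, noise levels and the toy signal drop are assumptions for illustration only, not values from the study.

```python
# Minimal sketch (shapes and names are assumptions): voxel-wise subtraction
# of SL-off from SL-on volumes, averaged over phase-cycling steps, to isolate
# rotary-saturation effects from shared T2* effects.
import numpy as np

def pc_sirs_contrast(sl_on, sl_off):
    """sl_on, sl_off: arrays of shape (n_phase_cycles, nx, ny, nz)."""
    diff = sl_on.astype(float) - sl_off.astype(float)  # removes common T2* signal
    return diff.mean(axis=0)                           # average over phase cycles

rng = np.random.default_rng(0)
sl_on = rng.normal(100.0, 1.0, size=(8, 64, 64, 30))
sl_off = rng.normal(100.0, 1.0, size=(8, 64, 64, 30))
sl_on[:, 30:34, 30:34, 14:16] -= 2.0   # toy rotary-saturation signal drop
contrast = pc_sirs_contrast(sl_on, sl_off)
print(contrast.shape, contrast[32, 32, 15].round(2))
```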
Resumo:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combined agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern of the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design is based on formulating the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial conduct, we use the current values of the posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial trial design. The proposed design yields a significantly higher probability of selecting the best treatment while at the same time allocating substantially more patients to efficacious treatments. The proposed design is most appropriate for trials combining multiple agents and screening for the efficacious combination to be investigated further. The proposed Bayesian adaptive phase II screening design substantially outperformed the conventional complete factorial design. Our design allocates more patients to better treatments while at the same time providing higher power to identify the best treatment at the end of the trial. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and to decide whether the agents are promising enough to be sent to phase III trials. Interim monitoring is employed to stop the trial early for futility to avoid assigning an unacceptable number of patients to inferior treatments.
We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug. To address the issue of late-onset responses, we use a piece-wise exponential model to estimate the hazard function of time-to-response data and handle the missing responses using a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies. We show that the proposed method reduces the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate under different true response rates.
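A minimal sketch of the continuous-monitoring idea, assuming a simple Beta-Binomial model with fully observed responses; the piece-wise exponential hazard and the multiple imputation of late-onset responses used in the proposed design are not reproduced here, and the prior, threshold and lower bound p0 are illustrative assumptions.

```python
# Minimal sketch (a simplified Beta-Binomial backbone, not the full design with
# piece-wise exponential modelling and multiple imputation of late-onset
# responses): continuous futility monitoring of a single-arm phase II trial.
from scipy.stats import beta

def stop_for_futility(responses, n_enrolled, p0=0.20,
                      prior=(0.5, 0.5), threshold=0.05):
    """Stop if Pr(response rate > p0 | data) falls below `threshold`.

    p0 is the physician-specified lower bound of an acceptable response rate;
    the prior and threshold values here are illustrative assumptions.
    """
    a = prior[0] + responses
    b = prior[1] + n_enrolled - responses
    prob_promising = 1.0 - beta.cdf(p0, a, b)   # posterior Pr(p > p0)
    return prob_promising < threshold, prob_promising

# Example: 1 response among the first 15 evaluable patients.
stop, prob = stop_for_futility(responses=1, n_enrolled=15)
print(f"Pr(p > 0.20 | data) = {prob:.3f}, stop early = {stop}")
```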
Resumo:
Mixture modeling is commonly used to model categorical latent variables that represent subpopulations in which population membership is unknown but can be inferred from the data. In relatively recent years, finite mixture models have been applied to time-to-event data. However, the commonly used survival mixture model assumes that the effects of the covariates involved in failure times differ across latent classes, but that the covariate distribution is homogeneous. The aim of this dissertation is to develop a method to examine time-to-event data in the presence of unobserved heterogeneity within a mixture modeling framework. A joint model is developed to incorporate the latent survival trajectory along with the observed information for the joint analysis of a time-to-event variable, its discrete and continuous covariates, and a latent class variable. It is assumed that the effects of covariates on survival times and the distribution of covariates vary across different latent classes. The unobservable survival trajectories are identified by estimating the probability that a subject belongs to a particular class based on observed information. We applied this method to a Hodgkin lymphoma study with long-term follow-up and observed four distinct latent classes in terms of long-term survival and distributions of prognostic factors. Our results from simulation studies and from the Hodgkin lymphoma study demonstrated the superiority of our joint model compared with the conventional survival model. This flexible inference method provides more accurate estimation and accommodates unobservable heterogeneity among individuals while taking the interactions between covariates into consideration.
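The following toy sketch illustrates the finite-mixture idea on time-to-event data, assuming an uncensored two-component exponential mixture fitted by EM; the dissertation's joint model, with covariates and latent-class-specific covariate distributions, is considerably richer than this.

```python
# Minimal sketch (an assumption-laden toy, not the dissertation's joint model):
# EM for a two-component exponential mixture of uncensored survival times,
# illustrating how latent classes are recovered from time-to-event data.
import numpy as np

rng = np.random.default_rng(1)
t = np.concatenate([rng.exponential(2.0, 300),    # short-survival class
                    rng.exponential(12.0, 200)])  # long-survival class

pi, rates = np.array([0.5, 0.5]), np.array([1.0, 0.1])   # initial guesses
for _ in range(200):
    # E-step: posterior probability that each subject belongs to each class
    dens = pi * rates * np.exp(-np.outer(t, rates))       # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and exponential rates
    pi = resp.mean(axis=0)
    rates = resp.sum(axis=0) / (resp * t[:, None]).sum(axis=0)

print("class proportions:", pi.round(2))
print("mean survival times:", (1 / rates).round(2))
```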
Resumo:
The Phase I clinical trial is considered the "first in human" study in medical research to examine the toxicity of a new agent. It determines the maximum tolerable dose (MTD) of a new agent, i.e., the highest dose at which toxicity is still acceptable. Several phase I clinical trial designs have been proposed in the past 30 years. The well-known standard method, the so-called 3+3 design, is widely accepted by clinicians since it is the easiest to implement and does not require statistical calculation. The continual reassessment method (CRM), a design that uses Bayesian methods, has been rising in popularity in the last two decades, and several variants of the CRM design have been suggested in the statistical literature. The rolling six is a newer method, introduced in pediatric oncology in 2008, which claims to shorten the trial duration as compared with the 3+3 design. The goal of the present research was to simulate clinical trials and compare these phase I clinical trial designs. The patient population was created by the discrete event simulation (DES) method. The characteristics of the patients were generated from several distributions with parameters derived from a review of historical phase I clinical trial data. Patients were then selected and enrolled in clinical trials, each of which used the 3+3 design, the rolling six, or the CRM design. Five scenarios of dose-toxicity relationship were used to compare the performance of the phase I clinical trial designs. One thousand trials were simulated per phase I clinical trial design per dose-toxicity scenario. The results showed that the rolling six design was not superior to the 3+3 design in terms of trial duration: the time to trial completion was comparable between the rolling six and the 3+3 design. However, both shortened the duration as compared with the two CRM designs. Both CRMs were superior to the 3+3 design and the rolling six in accuracy of MTD estimation. The 3+3 design and the rolling six tended to assign more patients to undesired lower dose levels. The toxicities were slightly greater in the CRMs.
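A minimal sketch of the kind of simulation described above, limited to the 3+3 design under one assumed dose-toxicity scenario; the discrete event simulation of patient characteristics and the rolling six and CRM comparators are not reproduced, and the toxicity probabilities are illustrative.

```python
# Minimal sketch (illustrative scenario, not the dissertation's simulation
# framework): repeated runs of the standard 3+3 dose-escalation design.
import numpy as np

def run_3plus3(tox_probs, rng):
    """One simulated 3+3 trial; returns the declared MTD index or None.

    Simplified rules: no confirmation cohort of 6 at the declared MTD.
    """
    dose = 0
    while dose < len(tox_probs):
        dlt = rng.binomial(3, tox_probs[dose])          # first cohort of 3
        if dlt == 1:
            dlt += rng.binomial(3, tox_probs[dose])     # expand to 6 patients
        if dlt >= 2:
            return dose - 1 if dose > 0 else None       # MTD is one dose below
        dose += 1                                       # 0/3 or <=1/6 DLTs: escalate
    return len(tox_probs) - 1                           # escalated past the top dose

rng = np.random.default_rng(2)
scenario = [0.05, 0.10, 0.25, 0.45, 0.60]               # assumed DLT probabilities
mtds = [run_3plus3(scenario, rng) for _ in range(1000)]
declared = [m for m in mtds if m is not None]
freq = {dose: declared.count(dose) for dose in sorted(set(declared))}
print("declared-MTD frequencies by dose level:", freq)
```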
Resumo:
The sampling area was extended to the south-western area off the Black Sea coast, from Cape Kaliakra toward the Bosphorus. Samples were collected along four transects. The whole dataset is composed of 17 samples (from 10 stations) with data on mesozooplankton species composition, abundance and biomass. Sampling for zooplankton was performed from the bottom up to the surface, at depths depending on water column stratification and the thermocline depth. These data are organized in the report "Control of eutrophication, hazardous substances and related measures for rehabilitating the Black Sea ecosystem: Phase 2: Leg I: PIMS 3065"; the data report is not published. Zooplankton samples were collected with a vertical closing Juday net (mouth diameter 36 cm, mesh size 150 µm). Tows were performed in discrete layers from near-bottom depths up to the surface. Samples were preserved in a buffered 4% formaldehyde-seawater solution. Sampling volume was estimated by multiplying the mouth area by the wire length. Mesozooplankton abundance: The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into the rectangular counting chamber for taxonomic identification and counting. Large (> 1 mm body length) and less abundant species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope to the lowest possible taxon. Taxonomic identification was done at the Institute of Oceanology by Kremena Stefanova using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972). Taxon-specific abundance: The collected material was analysed using the method of Dimov (1959). Samples were brought to a volume of 25-30 ml, depending on zooplankton density, and mixed intensively until all organisms were distributed randomly in the sample volume. A 5 ml aliquot was then taken and poured into the rectangular counting chamber for taxonomic identification and counting. Copepods and cladocerans were identified and enumerated; the other mesozooplankters were identified and enumerated at a higher taxonomic level (commonly referred to as mesozooplankton groups). Large (> 1 mm body length) and less abundant species were counted in the whole sample. Counting and measuring of organisms were carried out in the Dimov chamber under a stereomicroscope to the lowest possible taxon. Taxonomic identification was done at the Institute of Oceanology by Kremena Stefanova using the relevant taxonomic literature (Mordukhay-Boltovskoy, F.D. (Ed.), 1968, 1969, 1972).
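The volume and abundance arithmetic described above (mouth area times wire length, and a 5 ml aliquot of the 25-30 ml concentrated sample) can be made concrete as follows; the wire length, concentrated sample volume and aliquot count are assumed, illustrative values.

```python
# Minimal sketch (illustrative numbers): the abundance arithmetic described
# above - filtered volume from mouth area x wire length, and scaling a count
# made on a 5 ml aliquot of the 25-30 ml concentrated sample.
import math

net_diameter_m = 0.36                       # Juday net mouth diameter
wire_length_m = 50.0                        # assumed wire length (tow depth)
mouth_area_m2 = math.pi * (net_diameter_m / 2) ** 2
filtered_volume_m3 = mouth_area_m2 * wire_length_m

sample_volume_ml = 27.0                     # concentrated sample (25-30 ml)
aliquot_ml = 5.0                            # counted subsample
count_in_aliquot = 140                      # assumed copepod count

individuals_in_sample = count_in_aliquot * sample_volume_ml / aliquot_ml
abundance_per_m3 = individuals_in_sample / filtered_volume_m3
print(f"filtered volume = {filtered_volume_m3:.2f} m3, "
      f"abundance = {abundance_per_m3:.0f} ind/m3")
```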
Resumo:
Let G be a reductive complex Lie group acting holomorphically on normal Stein spaces X and Y, which are locally G-biholomorphic over a common categorical quotient Q. When is there a global G-biholomorphism X → Y? If the actions of G on X and Y are what we, with justification, call generic, we prove that the obstruction to solving this local-to-global problem is topological and provide sufficient conditions for it to vanish. Our main tool is the equivariant version of Grauert's Oka principle due to Heinzner and Kutzschebauch. We prove that X and Y are G-biholomorphic if X is K-contractible, where K is a maximal compact subgroup of G, or if X and Y are smooth and there is a G-diffeomorphism ψ : X → Y over Q, which is holomorphic when restricted to each fibre of the quotient map X → Q. We prove a similar theorem when ψ is only a G-homeomorphism, but with an assumption about its action on G-finite functions. When G is abelian, we obtain stronger theorems. Our results can be interpreted as instances of the Oka principle for sections of the sheaf of G-biholomorphisms from X to Y over Q. This sheaf can be badly singular, even for a low-dimensional representation of SL₂(ℂ). Our work is in part motivated by the linearisation problem for actions on ℂⁿ. It follows from one of our main results that a holomorphic G-action on ℂⁿ, which is locally G-biholomorphic over a common quotient to a generic linear action, is linearisable.
Resumo:
Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R (in C²/s) that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson's noise figure F and to the charge noise power 4kT/R (in C²/s) of their capacitance C that today's modelling would consider as the current noise density (in A²/Hz) of their resistance R. Linked with this A²/Hz ≡ C²/s equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
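For orientation, the sketch below evaluates the standard Johnson-Nyquist density 4kT/R and the dimensional equivalence A²/Hz ≡ C²/s referred to in the abstract; it does not implement the authors' admittance-based noise model, and the values of R, T and the noise figure F are illustrative assumptions.

```python
# Minimal sketch (standard Johnson-Nyquist numbers, not the authors' new
# admittance-based model): the density 4kT/R and the dimensional equivalence
# A^2/Hz = (C^2/s^2) * s = C^2/s used in the abstract.
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0                 # temperature, K
R = 1e3                   # ohms (illustrative resistance of the L-C-R tank)
F = 2.0                   # illustrative Leeson noise figure

current_noise_density = 4 * k_B * T / R      # A^2/Hz
charge_noise_power = current_noise_density   # numerically identical in C^2/s
print(f"4kT/R  = {current_noise_density:.3e} A^2/Hz")
print(f"       = {charge_noise_power:.3e} C^2/s")
print(f"4FkT/R = {F * charge_noise_power:.3e} C^2/s (with noise figure F)")
```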
Resumo:
The reuse of contaminated soil was investigated most intensively in the early 1990s, coinciding with the 1991 Gulf War, when efforts to remediate large crude oil releases prompted the geotechnical assessment of contaminated soils. Isolated works on geotechnical testing of hydrocarbon-contaminated ground are described in the state of the art, extended with references to other types of contaminated soil. The reduction in bearing capacity of soils contaminated by light non-aqueous phase liquids (LNAPLs) has previously been investigated from a forensic point of view. To date, all published research has been based on the assumption of constant contaminant saturation throughout the entire soil mass. In contrast, actual LNAPL distribution plumes exhibit complex flow patterns that are subject to physical and chemical changes with time and with the distance travelled from the release source. This aspect has been considered throughout the present text. A typical Madrid arkosic soil formation is commonly known as Miga sand. Geotechnical tests have been carried out on Miga sand specimens with incremental series of LNAPL concentrations in order to observe the variation in soil engineering properties as contamination increases. Results are discussed in relation to previous studies; soil mechanics parameters do change in the presence of LNAPL, showing different tendencies according to each test, the LNAPL content, and the specimen's initially planned relative density, dense or loose. Practical geotechnical implications are also commented on and analyzed. Variation in geotechnical properties may occur only within the external contour of the contaminant distribution plume. This has motivated the author to develop a physical model based on transparent soil technology. The model aims to reproduce the distribution of an LNAPL in the ground due to an accidental release from a storage facility. Preliminary results indicate that the model is a potentially complementary tool for hydrogeological applications, site characterization and remediation treatment testing within the framework of soil pollution events. A description of the test setup of an innovative three-dimensional physical model for the flow of two or more phases in porous media is presented herein, along with a summary of the advantages, limitations and future applications of modelling with transparent material.
Resumo:
Aluminium is added to decrease matrix chromium losses in 430 stainless steel sintered in a nitrogen atmosphere. Three different ways were used to add 3% (by weight) aluminium: as elemental powder, as prealloyed powder, and as an intermetallic Fe-Al compound. After die pressing to densities between 6.1 and 6.5 g/cm3, samples were sintered in vacuum and in an N2-5%H2 atmosphere in a dilatometric furnace, so that dimensional change was recorded during sintering. Weight gain was obtained after nitrogen sintering in all materials owing to nitride formation. Sample expansion was obtained in all nitrogen-sintered steels with Al additions. The microstructure showed a dispersion of aluminium nitrides when prealloyed powders were used. On the contrary, aluminium nitride areas can be found when aluminium is added as elemental powder or as Fe-Al intermetallics. The nitrogen atmosphere also leads to austenite formation and hence, on cooling, dilatometric results showed a dimensional change at the austenitic-ferritic phase transformation temperature.
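As a rough cross-check of the reported weight gain, the sketch below gives the nitrogen uptake expected if all of the 3 wt% aluminium converted to AlN; any chromium nitride formation is ignored, so this is only the AlN-related contribution under that stated assumption.

```python
# Minimal sketch (a back-of-the-envelope assumption: all of the 3 wt% Al
# converts to AlN, chromium nitride formation ignored): upper bound on the
# nitrogen weight gain expected from aluminium nitride formation alone.
M_AL, M_N = 26.98, 14.01            # molar masses, g/mol

al_wt_fraction = 0.03               # 3 wt% aluminium addition
n_uptake_fraction = al_wt_fraction * (M_N / M_AL)
print(f"N pick-up from full AlN formation = {100 * n_uptake_fraction:.2f} wt%")
```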