957 results for Noncommutative phase space
Abstract:
Treating patients with combined agents is a growing trend in cancer clinical trials. Evaluating the synergism of multiple drugs is often the primary motivation for such drug-combination studies. Focusing on drug-combination studies in early-phase clinical trials, our research is composed of three parts: (1) we conduct a comprehensive comparison of four dose-finding designs in the two-dimensional toxicity probability space and propose using the Bayesian model averaging method to overcome the arbitrariness of the model specification and enhance the robustness of the design; (2) motivated by a recent drug-combination trial at MD Anderson Cancer Center with a continuous-dose standard-of-care agent and a discrete-dose investigational agent, we propose a two-stage Bayesian adaptive dose-finding design based on an extended continual reassessment method; (3) combining phase I and phase II clinical trials, we propose an extension of a single-agent dose-finding design, modeling time-to-event toxicity and efficacy to direct dose finding in two-dimensional drug-combination studies. We conduct extensive simulation studies to examine the operating characteristics of these designs and demonstrate their good performance in various practical scenarios.
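The Bayesian model averaging step described in part (1) can be sketched numerically. Everything below (the skeletons, the prior variance, and the interim data) is an invented illustration, not the dissertation's actual design:

```python
import numpy as np

# Hypothetical skeletons (prior toxicity guesses) for 4 dose levels under two
# candidate CRM models; the interim data below are invented for illustration.
skeletons = [np.array([0.05, 0.10, 0.20, 0.35]),
             np.array([0.10, 0.20, 0.35, 0.50])]
n_pat = np.array([3, 3, 3, 0])   # patients treated at each dose so far
n_tox = np.array([0, 0, 1, 0])   # toxicities observed at each dose

# Grid over the power-model parameter a, with an (assumed) N(0, 2) prior
a = np.linspace(-4.0, 4.0, 801)
da = a[1] - a[0]
prior = np.exp(-a**2 / (2 * 2.0))

def likelihood(skel):
    # CRM power model: p_i(a) = skeleton_i ** exp(a)
    p = skel[None, :] ** np.exp(a)[:, None]                  # shape (grid, dose)
    return np.prod(p**n_tox * (1 - p)**(n_pat - n_tox), axis=1)

# Posterior model weights from (discretized) marginal likelihoods, equal model priors
marg = np.array([np.sum(likelihood(s) * prior) * da for s in skeletons])
weights = marg / marg.sum()

# BMA toxicity estimate at each dose: model-weighted posterior-mean curves
est = np.zeros(4)
for w, skel in zip(weights, skeletons):
    post = likelihood(skel) * prior
    post /= post.sum() * da
    p = skel[None, :] ** np.exp(a)[:, None]
    est += w * np.sum(post[:, None] * p, axis=0) * da
```

Averaging over several skeletons in this way is what makes the design less sensitive to any single model specification.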
Abstract:
My dissertation focuses mainly on Bayesian adaptive designs for phase I and phase II clinical trials. It includes three specific topics: (1) proposing a novel two-dimensional dose-finding algorithm for biological agents, (2) developing Bayesian adaptive screening designs to provide more efficient and ethical clinical trials, and (3) incorporating missing late-onset responses to make an early stopping decision. Treating patients with novel biological agents is becoming a leading trend in oncology. Unlike cytotoxic agents, for which toxicity and efficacy monotonically increase with dose, biological agents may exhibit non-monotonic patterns in their dose-response relationships. Using a trial with two biological agents as an example, we propose a phase I/II trial design to identify the biologically optimal dose combination (BODC), which is defined as the dose combination of the two agents with the highest efficacy and tolerable toxicity. A change-point model is used to reflect the fact that the dose-toxicity surface of the combinational agents may plateau at higher dose levels, and a flexible logistic model is proposed to accommodate the possible non-monotonic pattern for the dose-efficacy relationship. During the trial, we continuously update the posterior estimates of toxicity and efficacy and assign patients to the most appropriate dose combination. We propose a novel dose-finding algorithm to encourage sufficient exploration of untried dose combinations in the two-dimensional space. Extensive simulation studies show that the proposed design has desirable operating characteristics in identifying the BODC under various patterns of dose-toxicity and dose-efficacy relationships. Trials of combination therapies for the treatment of cancer are playing an increasingly important role in the battle against this disease. 
To more efficiently handle the large number of combination therapies that must be tested, we propose a novel Bayesian phase II adaptive screening design to simultaneously select among possible treatment combinations involving multiple agents. Our design formulates the selection procedure as a Bayesian hypothesis testing problem in which the superiority of each treatment combination is equated to a single hypothesis. During the trial, we use the current posterior probabilities of all hypotheses to adaptively allocate patients to treatment combinations. Simulation studies show that the proposed design substantially outperforms the conventional multi-arm balanced factorial design: it yields a significantly higher probability of selecting the best treatment, allocates substantially more patients to efficacious treatments, and provides higher power to identify the best treatment at the end of the trial. The design is most appropriate for trials that combine multiple agents and screen for efficacious combinations to be investigated further. Phase II studies are usually single-arm trials conducted to test the efficacy of experimental agents and decide whether they are promising enough to advance to phase III. Interim monitoring is employed to stop a trial early for futility, to avoid assigning an unacceptable number of patients to inferior treatments. We propose a Bayesian single-arm phase II design with continuous monitoring for estimating the response rate of the experimental drug.
To address the issue of late-onset responses, we use a piecewise exponential model to estimate the hazard function of the time-to-response data and handle the missing responses with a multiple imputation approach. We evaluate the operating characteristics of the proposed method through extensive simulation studies and show that it shortens the total trial duration and yields desirable operating characteristics for different physician-specified lower bounds of the response rate under different true response rates.
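The piecewise exponential hazard estimation mentioned above can be sketched as follows; the follow-up data and interval cut-points are hypothetical, and the multiple imputation step for the missing responses is omitted:

```python
import numpy as np

# Hypothetical follow-up data: time to response (weeks) and whether a
# response was observed (0 = censored, i.e. response not yet seen)
times    = np.array([1.2, 2.5, 3.1, 4.0, 5.5, 6.0, 6.0, 6.0])
observed = np.array([1,   1,   1,   1,   1,   0,   0,   0])

cuts = np.array([0.0, 2.0, 4.0, 6.0])   # intervals [0,2), [2,4), [4,6)

# MLE of the constant hazard in each interval: events / total exposure time
haz = np.zeros(len(cuts) - 1)
for j in range(len(cuts) - 1):
    lo, hi = cuts[j], cuts[j + 1]
    exposure = np.clip(times, lo, hi) - lo           # time spent in interval
    events = np.sum(observed * (times > lo) * (times <= hi))
    haz[j] = events / exposure.sum()

# Probability of response by time t: 1 - exp(-cumulative hazard)
def p_response_by(t):
    cum = 0.0
    for j in range(len(haz)):
        lo, hi = cuts[j], cuts[j + 1]
        cum += haz[j] * max(0.0, min(t, hi) - lo)
    return 1.0 - np.exp(-cum)
```

In the design described above, fitted quantities like `p_response_by` would feed the imputation of the still-missing responses at each interim look.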
Abstract:
The Zambezi deep-sea fan, the largest of its kind along the east African continental margin, is poorly studied to date, despite its potential to record marine and terrestrial climate signals in the southwest Indian Ocean. Therefore, gravity core GeoB 9309-1, retrieved from 1219 m water depth, was investigated for various geophysical (magnetic susceptibility, porosity, colour reflectance) and geochemical (pore water and sediment geochemistry, Fe and P speciation) properties. Onboard and onshore data documented a sulphate/methane transition (SMT) zone at ~450-530 cm sediment depth, where the simultaneous consumption of pore water sulphate and methane liberates hydrogen sulphide and bicarbonate into the pore space. This leads to characteristic changes in the sediment and pore water chemistry, such as the reduction of primary Fe (oxyhydr)oxides, the precipitation of Fe sulphides, and the mobilization of Fe (oxyhydr)oxide-bound P. These chemical processes also lead to a marked decrease in magnetic susceptibility. Below the SMT, we find a reduction in porosity, possibly due to pore space cementation by authigenic minerals. Formation of the observed geochemical, magnetic and mineralogical patterns requires a fixation of the SMT at this distinct sediment depth for a considerable time, which we calculated to be ~10,000 years assuming steady-state conditions, following a period of rapid upward migration towards this interval. We postulate that the worldwide sea-level rise at the last glacial/interglacial transition (~10,000 years B.P.) most probably caused the fixation of the SMT at its present position, through drastically reduced sediment delivery to the deep-sea fan. In addition, we report an internal redistribution of P occurring around the SMT, closely linked to the (de)coupling of sedimentary Fe and P, and leaving a characteristic pattern in the solid P record. Through phosphate re-adsorption onto Fe (oxyhydr)oxides above the SMT, and the formation of authigenic P minerals (e.g. vivianite) below it, deep-sea fan deposits may act as long-term sinks for P.
Abstract:
This PhD work focuses on liquid-crystal-based tunable phase devices, with special emphasis on their design and manufacturing. In the course of the work, a number of new manufacturing technologies were implemented in the UPM clean room facilities, leading to an important improvement in the range of devices that can be manufactured in the laboratory. Furthermore, a number of novel phase devices were developed, all of them including novel electrodes and/or alignment layers. The most important manufacturing advance has been the introduction of reactive ion etching as a tool for achieving high-resolution photolithography on indium tin oxide (ITO) coated glass and quartz substrates. Another important manufacturing result is the successful development of a bonding protocol for anisotropic conductive adhesives, which have been employed in high-density interconnections between ITO glass and flexible printed circuits. Regarding material characterization, the comparative study of nonstoichiometric silicon oxide (SiOx) and silica (SiO2) inorganic alignment layers, and of the relationship between surface layer deposition, layer morphology and liquid crystal electrooptical response, must be highlighted, together with the characterization of the degradation of liquid crystal devices in a simulated space mission environment. A wide variety of phase devices have been developed, with special emphasis on beam steerers. One of these was developed within the framework of an ESA project and consisted of a high-density reconfigurable 1D blazed grating, with spatial separation between the controlling microelectronics and the active, radiation-exposed area. The developed devices confirmed the assumption that liquid crystal devices with such a separation of components are radiation hard and can be designed to be both vibration- and temperature-resistant.
In parallel to the above, an evenly variable analog beam steering device was designed, manufactured and characterized, providing narrow-cone, diffraction-free beam steering. This steering device requires only a very limited number of electrodes to redirect a light beam: as few as 4 different voltage levels were needed. Finally, at the Wojskowa Akademia Techniczna (Military University of Technology) in Warsaw, Poland, a wedged analog tunable beam steering device was designed, manufactured and characterized. This beam steerer, like the former one, was designed to resist the harsh conditions both in space and during the shuttle launch. Apart from the beam steering devices, reconfigurable vortex and modal lens devices have been manufactured and characterized. In summary, during this work a large number of liquid crystal devices and liquid crystal device manufacturing technologies have been developed. Besides their relevance in scientific publications and technical achievements, most of these new devices have demonstrated their usefulness in the actual work of the research group where this PhD has been completed. This thesis work has focused on the design, fabrication and characterization of novel liquid crystal phase devices. Liquid crystal devices are currently being developed for applications beyond their usual use as displays. They have the advantage of being controllable with low voltages and requiring no mechanical elements for their operation. All devices in this work were fabricated in the group's clean room. The clean room was designed by the research group; it is small but very versatile, and is divided into different work areas depending on the type of process being carried out.
The clean room is completely lined with a dust-free material, and all gas and water supply inlets are sealed. Filtered air is constantly pumped into the clean area to create an overpressure that prevents unfiltered air from entering. People working in this area must always wear a special protective suit consisting of coveralls, a mask, latex gloves, a cap, overshoes and, when necessary, UV protection goggles. Any material brought into the clean room must first be cleaned with alcohol and special wipes and then dried with pressurized nitrogen. Fabrication must strictly follow a fixed sequence of steps, which may change depending on the requirements of each device. Device fabrication therefore requires the formulation of several fabrication protocols. These protocols must be strictly followed in order to obtain repeatable experiments, which in turn ensures a reliable fabrication process. A liquid crystal cell generally consists of two glass plates assembled as a sandwich and held at a fixed separation; the plates undergo a series of processes to condition their inner surfaces, and the cell is then filled with liquid crystal. In brief, the general fabrication process is as follows: first, the glass plates (whose inner face is conductive) are cut and cleaned. Next, the conductive tracks forming the pixels are printed on the glass; these tracks are produced from the ITO (indium tin oxide) conductive coating by photolithography with a photosensitive resist, followed by development and etching of the unprotected ITO. The inner faces of the plates are then conditioned by depositing a layer that may be organic or inorganic (a polymer or an oxide).
This step is crucial for the operation of the device: it induces the orientation of the liquid crystal molecules. Once the surfaces are conditioned, spacers are deposited on them: small spheres or cylinders of calibrated size (a few micrometres) that guarantee a homogeneous device thickness. An adhesive gasket is then deposited on one of the substrates. Next, the substrates are assembled, taking care that the gasket leaves an opening through which the liquid crystal can later be introduced into the cell. The cell is filled in a vacuum chamber, after which the opening is sealed. Finally, the wires are connected to the cell and the polarizers are mounted outside the clean room (Figure 1). Depending on the application, the liquid crystal employed and the other cell components will have particular characteristics. For the design of the devices in this work, a study of inorganic alignment surfaces for the liquid crystal was carried out; this is of great importance for the preparation of the phase devices, depending on the environmental conditions in which they will operate. The inorganic materials studied were SiOx and SiO2. The study covered the preparation factors that influence alignment, the behaviour of the liquid crystal as these factors are varied, and the morphology of the resulting surfaces.
Abstract:
In large antenna arrays with many antenna elements, the number of measurements required to characterize the array is very demanding in cost and time. This letter presents a new offline calibration process for active antenna arrays that reduces the number of measurements through subarray-level characterization. Measurements, characterization, and calibration are treated as a single global procedure that assesses the most suitable calibration technique and the computation of the compensation matrices. The procedure has been fully validated with measurements of a 45-element triangular panel array designed for Low Earth Orbit (LEO) satellite tracking, compensating for the degradation due to gain and phase imbalances and mutual coupling.
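For the gain/phase-imbalance part of such a calibration, a minimal sketch is a diagonal compensation matrix that inverts each element's measured complex gain; compensating mutual coupling would add off-diagonal terms and is not shown here. The subarray size and error levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measured excitations of an 8-element subarray: each element
# deviates from the ideal uniform excitation in gain and phase
ideal = np.ones(8, dtype=complex)
gain_err  = 1 + 0.2 * rng.standard_normal(8)           # amplitude imbalance
phase_err = np.deg2rad(20 * rng.standard_normal(8))    # phase imbalance
measured = ideal * gain_err * np.exp(1j * phase_err)

# Diagonal compensation matrix: invert each element's measured complex gain
C = np.diag(ideal / measured)

corrected = C @ measured   # should restore the ideal excitation
```

Characterizing one subarray at a time, as the letter proposes, reduces the number of such measurement campaigns needed for the full array.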
Abstract:
We present a combinatorial decision problem, inspired by the celebrated quiz show Countdown, that involves the computation of a given target number T from a set of k randomly chosen integers along with a set of arithmetic operations. We find that the probability of winning the game exhibits a threshold phenomenon that can be understood in terms of an algorithmic phase transition as a function of the set size k. Numerical simulations show that this probability sharply transitions from zero to one at some critical value of the control parameter, hence separating the algorithm's parameter space into different phases. We also find that the system is maximally efficient close to the critical point. We derive analytical expressions that match the numerical results for finite size and allow us to extrapolate the behavior in the thermodynamic limit.
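The decision problem can be made concrete with a brute-force solver over subsets; this is a sketch of the game's rules as described in the abstract, not the authors' code. Exact rational arithmetic avoids floating-point issues with division:

```python
from fractions import Fraction
from itertools import combinations
from functools import lru_cache

def reachable(numbers):
    """Set of all values computable from any subset of `numbers` with +, -, *, /."""

    @lru_cache(maxsize=None)
    def solve(s):                       # s: sorted tuple of Fractions
        vals = set(s)
        for k in range(1, len(s) // 2 + 1):
            for left in set(combinations(s, k)):
                rest = list(s)
                for x in left:
                    rest.remove(x)
                A = solve(tuple(sorted(left)))
                B = solve(tuple(sorted(rest)))
                vals |= A | B           # values using only part of the numbers
                for a in A:             # combine results from the two halves
                    for b in B:
                        vals |= {a + b, a - b, b - a, a * b}
                        if b: vals.add(a / b)
                        if a: vals.add(b / a)
        return frozenset(vals)

    return solve(tuple(sorted(Fraction(n) for n in numbers)))

def wins(numbers, target):
    """Decision problem: can `target` be formed from the chosen numbers?"""
    return Fraction(target) in reachable(numbers)
```

For example, `wins([25, 4, 3], 103)` holds via 25 * 4 + 3. The exhaustive search grows super-exponentially in k, which is why the win probability as a function of k is the natural control parameter.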
Abstract:
Air Mines The sky over the city's port was the color of a faulty screen, only partly lit up. As the silhouette of nearby buildings became darker, but more clearly visible against the fading blur-filter of a background, the realization came about how persistent a change had been taking place. Slowly, old wooden water reservoirs and rattling HVAC systems stopped being the only inhabitants of roofs. Slightly trembling, milkish jellyfish-translucent air volumes had joined the show in multiples. A few years ago artists and architects seized upon the death of buildings as their life-saving media. Equipped with constructive atlases and instruments they started disemboweling their subjects, poking about their systems, dumping out on the street the battered ugliness of their embarrassing bits and pieces, so rightly hidden by facades and height from everyday view. But, would you believe it? Even "old ladies", investment bankers or small children failed to get upset. Of course, old ladies are not what they used to be. It was old ladies themselves that made it happen after years of fights with the town hall, imaginative proposals and factual arguments. An industry with little financial gains but lots of welcome externalities was not, in fact, the ground for investment bankers. But they too had to admit that having otherwise stately buildings make fine particulate pencils with their facades was not the worst that could happen. Yes, making soot pencils had been found an interesting and visible end product of the endeavor, a sort of mining the air for vintage writing tools one can actually touch. The new view from the street did not seem as solid or dignified as that of old, and they hated that the market for Fine Particulates Extraction (FPE, read efpee) had to be applied on a matrix of blocks and streets that prevented undue concentration of the best or worst solutions. It had to be an evenly distributed city policy in order for the city to apply for cleaning casino money.
Once the first prototypes had been deployed in buildings siding Garden Avenue or Bulwark Street, even fast movers appreciated the sidekick of flower and plant smell dripping down the Urban Space Stations (USS, read use; USSs, read uses) as air and walls cooled off for a few hours after sunset. Enough. It was all nice to remember, but it was now time to go up and start the lightweight afternoon maintenance of their USS. Coop discussions had taken place all through the planning, and continued through the construction phase, as to how maintenance was going to be organized. Fasters had voted for a pro: pay a small amount and let them use it for rent and produce. In the end some neighbors decided they were slow enough to take care of it themselves, and it was now their turn. Regret came periodically, sometimes a week before, and lasted until work actually started. But lately it had been replaced by anxiety when it needed to be passed over to the next caretaker. It did not look like their shift was good enough, and they couldn't wait to fix it. Today small preparations needed to be made for a class visit the next day from a nearby cook school. They were frequenters. It had not been easy, but it shouldn't have been that hard. In the end, even the easiest things are hard if they involve a city, buildings and neighbors. On the face of the data, the technicalities and the way final designs had been worked out for adaptation to the different settings, the decision of where to go was self-evident, but organization issues and the ever-growing politics of taste in a city of already-gentrified-rodents almost put the project in the frozen orbit of timeless beautiful future possibilities. This is how it was. A series of designs by XClinic and OSS had made it possible to adapt to different building structures, leave in most cases the roof untouched, and adopt a new technology of flexing fiberglass tubes that dissipated wind pressure in smooth bending...
Abstract:
Operational Modal Analysis consists of estimating the modal parameters of a structure (natural frequencies, damping ratios and modal vectors) from output-only vibration measurements. The modal vectors can only be estimated where a sensor is placed, so when the number of available sensors is smaller than the number of points to be tested, it is usual to perform several tests, changing the position of the sensors from one test to the next (multiple setups of sensors): some sensors stay at the same position from setup to setup, while the others change position until all the tested points are covered. The permanent (reference) sensors are then used to merge the mode shapes estimated in each setup (the partial modal vectors) into global modal vectors. Traditionally, the partial modal vectors are estimated independently setup by setup, and the global modal vectors are obtained in a post-processing phase. In this work we present two state space models that can be used to process all the recorded setups at the same time, and we show how these models can be estimated using the maximum likelihood method. As a result, the global mode shape of each mode is obtained automatically, and a single value for the natural frequency and damping ratio of each mode is computed. Finally, both models are compared using real measured data.
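The traditional post-processing merge that this work improves upon can be sketched as follows: partial mode shapes from two setups, each identified only up to an arbitrary scale, are aligned with a least-squares factor computed on the shared reference sensors. The mode shape and scale factors below are invented:

```python
import numpy as np

# Hypothetical "true" global mode shape at 6 points; points 0 and 1 carry the
# permanent reference sensors, the rest are covered in two setups
true_shape = np.array([1.0, 0.8, 0.5, 0.1, -0.4, -0.9])

ref = [0, 1]
setup1_dofs = ref + [2, 3]     # setup 1 measures points 0,1,2,3
setup2_dofs = ref + [4, 5]     # setup 2 measures points 0,1,4,5

# Each setup identifies its partial mode shape only up to an arbitrary scale
phi1 = 1.0 * true_shape[setup1_dofs]
phi2 = -2.3 * true_shape[setup2_dofs]   # different, unknown scale

# Least-squares scale factor aligning setup 2 to setup 1 on the reference sensors
r1, r2 = phi1[:len(ref)], phi2[:len(ref)]
alpha = (r2 @ r1) / (r2 @ r2)

# Merge into a global mode shape (reference entries taken from setup 1)
glob = np.zeros(6)
glob[setup1_dofs] = phi1
glob[setup2_dofs[len(ref):]] = alpha * phi2[len(ref):]
```

The joint state-space formulation in the paper avoids this two-step procedure by estimating a single global shape, and a single frequency and damping ratio, from all setups at once.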
Abstract:
The visual system utilizes binocular disparity to discriminate the relative depth of objects in space. Since the striate cortex is the first site along the central visual pathways at which signals from the left and right eyes converge onto a single neuron, encoding of binocular disparity is thought to begin in this region. There are two possible mechanisms for encoding binocular disparity through simple cells in the striate cortex: a difference in receptive field (RF) position between the two eyes (RF position disparity) and a difference in RF profile between the two eyes (RF phase disparity). Although there have been studies supporting each of the two encoding mechanisms, both mechanisms have not been examined in a single study. Therefore, the relative roles of the two mechanisms have not been determined. To address this issue, we have mapped left and right eye RFs of simple cells in the cat’s striate cortex using binary m-sequence noise, and then we have estimated RF position and phase disparities. We find that RF position disparities are generally limited to small values that are not sufficient to encode large binocular disparities. In contrast, RF phase disparities cover a wide range of binocular disparities and exhibit dependencies on orientation and spatial frequency in a manner expected for a mechanism that encodes binocular disparity. These results indicate that binocular disparity is mainly encoded through RF phase disparity. However, RF position disparity may play a significant role for cells with high spatial frequency selectivity, which are constrained to small RF phase disparities.
Abstract:
Analysis of electrocardiograms (ECG) today is mostly carried out using time-dependent signals of the different leads, displayed as graphs. ECG parameters are defined by qualified personnel and require particular skills. To support decoding of the cardiac depolarization phase of the ECG, methods exist to analyze space-time charts in three dimensions, in which the heartbeat is described by the trajectory of its electrical vector. On this basis, it can be assumed that all options available in the classical ECG analysis of this time segment can be obtained using this technique. The investigated three-dimensional ECG visualization techniques, combined with quantitative methods, yield additional features of cardiac depolarization and allow better exploitation of the information content of the given ECG signals.
Abstract:
Binocular vision is traditionally treated as two processes: the fusion of similar images, and the interocular suppression of dissimilar images (e.g. binocular rivalry). Recent work has demonstrated that interocular suppression is phase-insensitive, whereas binocular summation occurs only when stimuli are in phase. But how do these processes affect our perception of binocular contrast? We measured perceived contrast using a matching paradigm for a wide range of interocular phase offsets (0–180°) and matching contrasts (2–32%). Our results revealed a complex interaction between contrast and interocular phase. At low contrasts, perceived contrast reduced monotonically with increasing phase offset, by up to a factor of 1.6. At higher contrasts the pattern was non-monotonic: perceived contrast was veridical for in-phase and antiphase conditions, and monocular presentation, but increased a little at intermediate phase angles. These findings challenge a recent model in which contrast perception is phase-invariant. The results were predicted by a binocular contrast gain control model. The model involves monocular gain controls with interocular suppression from positive and negative phase channels, followed by summation across eyes and then across space. Importantly, this model—applied to conditions with vertical disparity—has only a single (zero) disparity channel and embodies both fusion and suppression processes within a single framework.
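A generic two-stage binocular contrast gain control of the kind invoked here can be written schematically. The exponents and constants (m, p, q, S, Z) are free parameters, and this is only an illustrative form, not the specific model fitted in the study:

```latex
% Stage 1: monocular gain control with interocular suppression
R_L = \frac{C_L^{\,m}}{S + C_L + C_R}, \qquad
R_R = \frac{C_R^{\,m}}{S + C_R + C_L}
% Stage 2: summation across eyes, followed by a second gain control
R_{\mathrm{bin}} = \frac{\left(R_L + R_R\right)^{p}}{Z + \left(R_L + R_R\right)^{q}}
```

In the model described above, the stage-1 suppression would additionally be split into positive- and negative-phase channels, and stage 2 would be followed by summation across space.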
Abstract:
We consider turbulence within the Gross-Pitaevskii model and look into the creation of a coherent condensate via an inverse cascade originating at small scales. The growth of the condensate leads to a spontaneous breakdown of the statistical symmetries of over-condensate fluctuations: first, isotropy is broken; then a series of phase transitions marks the changing symmetry from twofold to threefold to fourfold. We describe the respective anisotropic flux flows in k space. At the highest level reached, we observe short-range positional and long-range orientational order (as in a hexatic phase). In other words, the more one pumps the system, the more ordered the system becomes. The phase transitions happen when the system is pumped by an instability term and do not occur when it is pumped by a random force. We thus demonstrate the nonuniversality of inverse-cascade turbulence with respect to the nature of the small-scale forcing.
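For reference, the Gross-Pitaevskii model is, in dimensionless form and up to convention-dependent coefficients, the nonlinear Schrödinger equation; the specific pumping and damping terms used in the study are not reproduced here:

```latex
i\,\partial_t \psi = -\nabla^2 \psi + |\psi|^2 \psi
% \psi: condensate wavefunction; |\psi|^2: local density.
% Forcing at small scales (an instability term or a random force)
% and large/small-scale damping are added to drive and absorb the cascade.
```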
Abstract:
To represent the local orientation and energy of a 1-D image signal, many models of early visual processing employ bandpass quadrature filters, formed by combining the original signal with its Hilbert transform. However, representations capable of estimating an image signal's 2-D phase have been largely ignored. Here, we consider 2-D phase representations using a method based upon the Riesz transform. For spatial images there exist two Riesz transformed signals and one original signal, from which orientation, phase and energy may be represented as a vector in 3-D signal space. We show that these image properties may be represented by a Singular Value Decomposition (SVD) of the higher-order derivatives of the original and the Riesz transformed signals. We further show that the expected responses of even and odd symmetric filters from the Riesz transform may be represented by a single signal autocorrelation function, which is beneficial in simplifying Bayesian computations for spatial orientation. Importantly, the Riesz transform allows one to weight linearly across orientation, using both symmetric and asymmetric filters, to account for some perceptual phase distortions observed in image signals, notably one's perception of edge structure within plaid patterns whose component gratings are either equal or unequal in contrast. Finally, exploiting the benefits that arise from the Riesz definition of local energy as a scalar quantity, we demonstrate the utility of Riesz signal representations in estimating the spatial orientation of second-order image signals. We conclude that the Riesz transform may be employed as a general tool for 2-D visual pattern recognition by virtue of representing phase, orientation and energy as orthogonal signal quantities.
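The Riesz construction underlying this abstract can be sketched with FFT-domain multipliers. The sign convention and the test grating below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def riesz(image):
    """First-order Riesz transform pair of a 2-D image, computed with the FFT
    multipliers -i*fx/|f| and -i*fy/|f| (sign conventions vary in the literature)."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    mag = np.hypot(fx, fy)
    mag[0, 0] = 1.0                      # avoid 0/0 at the DC term
    F = np.fft.fft2(image)
    r1 = np.real(np.fft.ifft2(-1j * fx / mag * F))
    r2 = np.real(np.fft.ifft2(-1j * fy / mag * F))
    return r1, r2

# Monogenic (image, r1, r2) triple for a test grating with 8 cycles per frame
y, x = np.mgrid[0:64, 0:64]
img = np.cos(2 * np.pi * 8 * x / 64)
r1, r2 = riesz(img)

energy = np.sqrt(img**2 + r1**2 + r2**2)        # local energy (a scalar quantity)
orientation = np.arctan2(r2, r1)                # local orientation
phase = np.arctan2(np.hypot(r1, r2), img)       # local phase
```

For this pure grating, `r1` is the quadrature (sine) partner of the cosine input, `r2` vanishes, and the local energy is constant across the image, which illustrates the "orthogonal signal quantities" property the abstract concludes with.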
Abstract:
Results of a pioneering study are presented in which, for the first time, crystallization, phase separation and Marangoni instabilities occurring during the spin-coating of polymer blends are directly visualized, in real space and real time. The results provide new insights into the process of self-assembly taking place during spin-coating, paving the way for the rational design of processing conditions that allow desired morphologies to be obtained.
Abstract:
Visual perception begins by dissecting the retinal image into millions of small patches for local analyses by local receptive fields. However, image structures extend well beyond these receptive fields and so further processes must be involved in sewing the image fragments back together to derive representations of higher order (more global) structures. To investigate the integration process, we also need to understand the opposite process of suppression. To investigate both processes together, we measured triplets of dipper functions for targets and pedestals involving interdigitated stimulus pairs (A, B). Previous work has shown that summation and suppression operate over the full contrast range for the domains of ocularity and space. Here, we extend that work to include orientation and time domains. Temporal stimuli were 15-Hz counter-phase sine-wave gratings, where A and B were the positive and negative phases of the oscillation, respectively. For orientation, we used orthogonally oriented contrast patches (A, B) whose sum was an isotropic difference of Gaussians. Results from all four domains could be understood within a common framework in which summation operates separately within the numerator and denominator of a contrast gain control equation. This simple arrangement of summation and counter-suppression achieves integration of various stimulus attributes without distorting the underlying contrast code.
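The arrangement in which summation operates separately within the numerator and denominator of a gain control equation can be written schematically; the exponent p, the constant Z and the excitatory/suppressive terms are generic placeholders rather than the fitted model:

```latex
\mathrm{resp}(A,B) \;=\; \frac{\left(E_A + E_B\right)^{p}}{Z + S_A + S_B}
% E_A, E_B: excitatory terms for the interdigitated stimuli A and B
% S_A, S_B: suppressive gain-pool terms; the A and B contributions are
% summed separately in the numerator and in the denominator before division
```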