927 results for duration, functional delta method, gamma kernel, hazard rate.


Relevance:

40.00%

Publisher:

Abstract:

Measurements of neutron and gamma dose rates in mixed radiation fields, and gamma dose rates from calibrated gamma sources, were performed using a liquid scintillation counter NE213 with a pulse shape discrimination technique based on the charge comparison method. A computer program was used to analyse the experimental data. The radiation field was obtained from a 241Am-9Be source. There was general agreement between measured and calculated neutron and gamma dose rates in the mixed radiation field, but some disagreement in the measurements of gamma dose rates for gamma sources, due to the dark current of the photomultiplier and the effect of the perturbation of the radiation field by the detector. An optical fibre bundle was used to couple an NE213 scintillator to a photomultiplier, in an attempt to minimise these effects. This produced an improvement in the results for gamma sources. However, the optically coupled detector system could not be used for neutron and gamma dose rate measurements in mixed radiation fields. The pulse shape discrimination system became ineffective as a consequence of the slower time response of the detector system.
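The charge comparison method described in this abstract can be illustrated with a short sketch: the pulse is integrated over a short (fast) gate and a long (total) gate, and the tail-to-total charge ratio separates neutron from gamma events. The gate lengths, decay constants and synthetic pulses below are illustrative assumptions, not parameters from the study.

```python
import numpy as np

def psd_ratio(pulse, t0, short_gate, long_gate):
    """Charge-comparison PSD: ratio of tail charge to total charge.

    pulse      -- baseline-subtracted samples (one value per time bin)
    t0         -- index of the pulse onset
    short_gate -- bins covering the fast component
    long_gate  -- bins covering the full pulse
    """
    total = pulse[t0:t0 + long_gate].sum()
    fast = pulse[t0:t0 + short_gate].sum()
    tail = total - fast
    return tail / total  # larger for neutrons (more delayed scintillation)

# Synthetic example: a gamma-like pulse (fast decay only) vs a
# neutron-like pulse with an extra slow component; decay constants
# are illustrative only.
t = np.arange(200.0)
gamma_pulse = np.exp(-t / 5.0)
neutron_pulse = 0.8 * np.exp(-t / 5.0) + 0.2 * np.exp(-t / 60.0)
r_gamma = psd_ratio(gamma_pulse, 0, 25, 200)
r_neutron = psd_ratio(neutron_pulse, 0, 25, 200)
```

Events are then classified by thresholding the ratio; the abstract's observation that a slower detector response defeats this scheme corresponds to the fast and slow components becoming indistinguishable in time.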


The need for low bit-rate speech coding is the result of growing demand on the available radio bandwidth for mobile communications, both for military purposes and for the public sector. To meet this growing demand, the available bandwidth must be utilized in the most economic way to accommodate more services. Two low bit-rate speech coders have been built and tested in this project. The two coders combine predictive coding with delta modulation, a combination which enables them to achieve simultaneously the low bit-rate and good speech quality requirements. To enhance their efficiency, the predictor coefficients and the quantizer step size are updated periodically in each coder. This enables the coders to keep up with changes in the characteristics of the speech signal over time and with changes in the dynamic range of the speech waveform. However, the two coders differ in the method of updating their predictor coefficients. One updates the coefficients once every one hundred sampling periods and extracts the coefficients from input speech samples. This is known in this project as the Forward Adaptive Coder. Since the coefficients are extracted from input speech samples, they must be transmitted to the receiver to reconstruct the transmitted speech, thus adding to the transmission bit rate. The other updates its coefficients every sampling period, based on information from output data. This coder is known as the Backward Adaptive Coder. Results of subjective tests showed both coders to be reasonably robust to quantization noise. Both were graded quite good, with the Forward Adaptive performing slightly better, but with a slightly higher transmission bit rate for the same speech quality, than its Backward counterpart. The coders yielded acceptable speech quality at 9.6 kbps for the Forward Adaptive coder and 8 kbps for the Backward Adaptive coder.
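A minimal sketch of one-bit delta modulation with a backward-adaptive step size, in the spirit of the Backward Adaptive Coder described above: the receiver can mirror the step adaptation from the bit stream alone, so no side information is transmitted. The step-size rule, initial step and adaptation factor are illustrative assumptions, not the coder's actual design.

```python
import math

def adm_encode(samples, step=0.05, beta=1.5):
    """One-bit adaptive delta modulation (backward-adaptive step size).

    Each bit says whether the input is above or below the running
    prediction; the step grows when consecutive bits agree (slope
    overload) and shrinks when they alternate (granular region).
    """
    bits, pred, prev_bit = [], 0.0, None
    for x in samples:
        bit = 1 if x >= pred else 0
        step = step * beta if bit == prev_bit else step / beta
        pred += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits

def adm_decode(bits, step=0.05, beta=1.5):
    """Mirror of the encoder: rebuilds the waveform from the bit stream
    using the same backward step adaptation, so nothing but the bits is
    needed at the receiver."""
    out, pred, prev_bit = [], 0.0, None
    for bit in bits:
        step = step * beta if bit == prev_bit else step / beta
        pred += step if bit else -step
        out.append(pred)
        prev_bit = bit
    return out

# A slowly varying test tone; the one-bit stream tracks it closely.
signal = [math.sin(2 * math.pi * 0.01 * n) for n in range(400)]
decoded = adm_decode(adm_encode(signal))
err = max(abs(a - b) for a, b in zip(signal, decoded))
```

The Forward Adaptive Coder differs in that predictor coefficients are re-estimated from blocks of input samples and must be sent to the receiver, which is the source of its extra bit rate.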


Altered state theories of hypnosis posit that a qualitatively distinct state of mental processing, which emerges in those with high hypnotic susceptibility following a hypnotic induction, enables the generation of anomalous experiences in response to specific hypnotic suggestions. If so, then such a state should be observable as a discrete pattern of changes to functional connectivity (shared information) between brain regions following a hypnotic induction in high but not low hypnotically susceptible participants. Twenty-eight-channel EEG was recorded from 12 high susceptible (highs) and 11 low susceptible (lows) participants with their eyes closed prior to and following a standard hypnotic induction. The EEG was used to provide a measure of functional connectivity using both coherence (COH) and the imaginary component of coherence (iCOH), which is insensitive to the effects of volume conduction. COH and iCOH were calculated between all electrode pairs for the frequency bands: delta (0.1-3.9 Hz), theta (4-7.9 Hz), alpha (8-12.9 Hz), beta1 (13-19.9 Hz), beta2 (20-29.9 Hz) and gamma (30-45 Hz). The results showed that there was an increase in theta iCOH from the pre-hypnosis to hypnosis condition in highs but not lows, with a large proportion of significant links being focused on a central-parietal hub. There was also a decrease in beta1 iCOH from the pre-hypnosis to hypnosis condition, with a focus on a fronto-central and an occipital hub, that was greater in high compared to low susceptibles. There were no significant differences for COH or for spectral band amplitude in any frequency band. The results are interpreted as indicating that the hypnotic induction elicited a qualitative change in the organization of specific control systems within the brain for high as compared to low susceptible participants. This change in the functional organization of neural networks is a plausible indicator of the much theorized "hypnotic-state". © 2014 Jamieson and Burgess.
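COH and iCOH as used above can be computed from the segment-averaged cross-spectrum of two channels; the imaginary part discounts zero-lag (volume-conducted) coupling because instantaneous mixing produces a purely real coherency. A minimal sketch with Hann-windowed, non-overlapping segments and synthetic signals; nothing here reproduces the study's actual pipeline or parameters.

```python
import numpy as np

def coh_icoh(x, y, fs, nperseg=256):
    """Welch-style coherency between two channels.

    Returns (freqs, COH, iCOH): COH is the coherence magnitude, iCOH the
    absolute imaginary part of the coherency. Hann windows, no overlap,
    kept short for clarity.
    """
    win = np.hanning(nperseg)
    segs = len(x) // nperseg
    sxx = syy = sxy = 0
    for k in range(segs):
        xs = np.fft.rfft(win * x[k * nperseg:(k + 1) * nperseg])
        ys = np.fft.rfft(win * y[k * nperseg:(k + 1) * nperseg])
        sxx += (xs * xs.conj()).real
        syy += (ys * ys.conj()).real
        sxy += xs * ys.conj()
    coherency = sxy / np.sqrt(sxx * syy)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.abs(coherency), np.abs(coherency.imag)

# Two noisy signals sharing a 6 Hz (theta-band) component with a phase
# lag, so both COH and iCOH should peak at 6 Hz.
fs, n = 128, 256 * 20
t = np.arange(n) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 6 * t) + rng.standard_normal(n)
y = np.sin(2 * np.pi * 6 * t - np.pi / 4) + rng.standard_normal(n)
freqs, coh, icoh = coh_icoh(x, y, fs)
peak = np.argmin(np.abs(freqs - 6))
```

With a zero phase lag the coherency at the shared frequency would be real, so iCOH would vanish while COH stayed high, which is exactly the volume-conduction confound iCOH is designed to suppress.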


Background: Esophageal intubation is a widely utilized technique for a diverse array of physiological studies, activating a complex physiological response mediated, in part, by the autonomic nervous system (ANS). In order to determine the optimal time period after intubation when physiological observations should be recorded, it is important to know the duration of, and factors that influence, this ANS response, in both health and disease. Methods: Fifty healthy subjects (27 males, median age 31.9 years, range 20-53 years) and 20 patients with Rome III defined functional chest pain (nine male, median age of 38.7 years, range 28-59 years) had personality traits and anxiety measured. Subjects had heart rate (HR), blood pressure (BP), sympathetic (cardiac sympathetic index, CSI), and parasympathetic nervous system (cardiac vagal tone, CVT) parameters measured at baseline and in response to per nasum intubation with an esophageal catheter. CSI/CVT recovery was measured following esophageal intubation. Key Results: In all subjects, esophageal intubation caused an elevation in HR, BP, CSI, and skin conductance response (SCR; all p < 0.0001) but concomitant CVT and cardiac sensitivity to the baroreflex (CSB) withdrawal (all p < 0.04). Multiple linear regression analysis demonstrated that longer CVT recovery times were independently associated with higher neuroticism (p < 0.001). Patients had prolonged CSI and CVT recovery times in comparison to healthy subjects (112.5 s vs 46.5 s, p = 0.0001 and 549 s vs 223.5 s, p = 0.0001, respectively). Conclusions & Inferences: Esophageal intubation activates a fight/flight ANS response. Future studies should allow for at least 10 min of recovery time. Consideration should be given to psychological traits and disease status as these can influence recovery. The psychological trait of neuroticism retards autonomic recovery following esophageal intubation in health and functional chest pain. © 2013 John Wiley & Sons Ltd.


The aim of this paper is to build the stated preference method into the social discount rate methodology. The first part of the paper presents the results of a survey about stated time preferences through pair-choice decision situations for various topics and time horizons. It is assumed that stated time preferences differ from calculated time preferences and that the extent of stated rates depends on the time period and on how much respondents are financially and emotionally involved in the transactions. A significant question remains: how can the gap between the calculations and the results of surveys be resolved, and how can the real time preferences of individuals be interpreted using a social time preference rate? The second part of the paper estimates the social time preference rate for Hungary using the results of the survey, while paying special attention to the pure time preference component. The results suggest that the current method of calculation of the pure time preference rate does not reflect the real attitudes of individuals towards future generations.
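Under the standard exponential-discounting assumption, each pair-choice decision of the kind surveyed above bounds (or, at indifference, pins down) the respondent's implied annual discount rate; a minimal sketch, not the paper's estimation procedure:

```python
def implied_rate(amount_now, amount_later, years):
    """Annual discount rate at which a respondent is indifferent between
    amount_now today and amount_later after `years` years, assuming
    exponential discounting:
        amount_now = amount_later / (1 + r) ** years
    """
    return (amount_later / amount_now) ** (1 / years) - 1

# A respondent indifferent between 100 now and 121 in two years
# reveals a 10% annual rate.
r = implied_rate(100, 121, 2)
```

Choosing the earlier amount reveals a rate above this threshold, and the later amount a rate below it, which is how a battery of pair choices brackets stated time preferences.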


The tropical echinoid Echinometra viridis was reared in controlled laboratory experiments at temperatures of approximately 20°C and 30°C to mimic winter and summer temperatures, and at carbon dioxide (CO2) partial pressures of approximately 487 ppm-v and 805 ppm-v to simulate current and predicted end-of-century levels. Spine material produced during the experimental period and dissolved inorganic carbon (DIC) of the corresponding culture solutions were then analyzed for stable oxygen (delta18Oe, delta18ODIC) and carbon (delta13Ce, delta13CDIC) isotopic composition. Fractionation of oxygen stable isotopes between the echinoid spines and the DIC of their corresponding culture solutions (Delta delta18O = delta18Oe - delta18ODIC) was significantly inversely correlated with seawater temperature but not significantly correlated with atmospheric pCO2. Fractionation of carbon stable isotopes between the echinoid spines and the DIC of their corresponding culture solutions (Delta delta13C = delta13Ce - delta13CDIC) was significantly positively correlated with pCO2 and significantly inversely correlated with temperature, with pCO2 functioning as the primary factor and temperature moderating the pCO2-delta13C relationship. Echinoid calcification rate was significantly inversely correlated with both Delta delta18O and Delta delta13C, both within treatments (i.e., pCO2 and temperature fixed) and across treatments (i.e., with the effects of pCO2 and temperature controlled for through ANOVA).

Therefore, calcification rate and potentially the rate of co-occurring dissolution appear to be important drivers of the kinetic isotope effects observed in the echinoid spines. Study results suggest that echinoid delta18O monitors seawater temperature, but not atmospheric pCO2, and that echinoid delta13C monitors atmospheric pCO2, with temperature moderating this relationship. These findings, coupled with echinoids' long and generally high-quality fossil record, support prior assertions that fossil echinoid delta18O is a viable archive of paleo-seawater temperature throughout Phanerozoic time, and that delta13C merits further investigation as a potential proxy of paleo-atmospheric pCO2. However, the apparent impact of calcification rate on echinoid delta18O and delta13C suggests that paleoceanographic reconstructions derived from these proxies in fossil echinoids could be improved by incorporating the effects of growth rate.
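The fractionation quantities used in this abstract are simple differences between the spine and culture-solution DIC isotopic compositions; a tiny sketch with made-up per-mil values, not measurements from the experiment:

```python
def fractionation(delta_mineral, delta_dic):
    """Spine-DIC fractionation as defined in the study:
    Delta = delta_mineral - delta_DIC (both in per mil)."""
    return delta_mineral - delta_dic

# Illustrative values only -- not data from the experiment:
big_delta_18o = fractionation(delta_mineral=30.1, delta_dic=28.6)
big_delta_13c = fractionation(delta_mineral=-1.2, delta_dic=0.9)
```

The study's correlations are then between these Delta values and the treatment variables (temperature, pCO2) and calcification rate.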


The exponential growth of studies on the biological response to ocean acidification over the last few decades has generated a large amount of data. To facilitate data comparison, a data compilation hosted at the data publisher PANGAEA was initiated in 2008 and is updated on a regular basis (doi:10.1594/PANGAEA.149999). By January 2015, a total of 581 data sets (over 4 000 000 data points) from 539 papers had been archived. Here we present the developments of this data compilation five years since its first description by Nisumaa et al. (2010). Most of the study sites from which data have been archived are still in the Northern Hemisphere, and the number of archived data sets from studies in the Southern Hemisphere and polar oceans is still relatively low. Data from 60 studies that investigated the response of a mix of organisms or natural communities were all added after 2010, indicating a welcome shift from the study of individual organisms to communities and ecosystems. The initial imbalance of considerably more data archived on calcification and primary production than on other processes has improved. There is also a clear tendency towards more data archived from multifactorial studies after 2010. For easier and more effective access to ocean acidification data, the ocean acidification community is strongly encouraged to contribute to the data archiving effort, to help develop standard vocabularies describing the variables, and to define best practices for archiving ocean acidification data.


Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in the value of n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
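The Kullback-Leibler divergence that Chapter 4 bounds has a closed form when both distributions are Gaussian, which is part of what makes an optimal Gaussian approximation convenient to evaluate; a minimal sketch of the standard Gaussian-to-Gaussian KL formula with illustrative inputs (this is the textbook formula, not the chapter's derivation):

```python
import numpy as np

def kl_gaussians(mu0, cov0, mu1, cov1):
    """Closed-form KL(N(mu0, cov0) || N(mu1, cov1)):
    0.5 * [tr(S1^-1 S0) + (m1-m0)' S1^-1 (m1-m0) - k + ln(det S1 / det S0)]
    """
    k = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = np.asarray(mu1, float) - np.asarray(mu0, float)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Identical Gaussians have zero divergence; a unit mean shift with unit
# covariances gives 0.5 * ||shift||^2.
kl_same = kl_gaussians([0, 0], np.eye(2), [0, 0], np.eye(2))
kl_shift = kl_gaussians([0, 0], np.eye(2), [1, 0], np.eye(2))
```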

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
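The slow mixing described above, high autocorrelations and small effective sample sizes, is commonly summarized by the effective sample size implied by a chain's autocorrelations. A minimal sketch using a crude truncated-autocorrelation estimator, with an AR(1) chain standing in for a slowly mixing sampler (this is a generic diagnostic, not the spectral-gap analysis of Chapter 7):

```python
import numpy as np

def effective_sample_size(chain, max_lag=200):
    """Crude ESS estimate: n / (1 + 2 * sum of positive autocorrelations),
    truncating the sum at the first non-positive autocorrelation."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf_sum = 0.0
    for lag in range(1, min(max_lag, n - 1)):
        rho = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
        if rho <= 0:
            break
        acf_sum += rho
    return n / (1 + 2 * acf_sum)

rng = np.random.default_rng(1)
iid = rng.standard_normal(5000)          # ideal sampler: ESS near n
ar = np.zeros(5000)                      # AR(1), coefficient 0.95:
for i in range(1, 5000):                 # a proxy for a slowly mixing
    ar[i] = 0.95 * ar[i - 1] + rng.standard_normal()  # Gibbs chain
ess_iid = effective_sample_size(iid)
ess_ar = effective_sample_size(ar)
```

For the data augmentation samplers described above, the spectral gap shrinking with sample size corresponds to the autocorrelation sum growing, and hence the ESS collapsing relative to the nominal chain length.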


Alzheimer’s disease and other dementias are among the most challenging illnesses confronting countries with ageing populations. Treatment options for dementia are limited, and the costs are significant. There is a growing need to develop new treatments for dementia, especially for the elderly. There is also growing evidence that centrally acting angiotensin converting enzyme (ACE) inhibitors, which cross the blood-brain barrier, are associated with a reduced rate of cognitive and functional decline in dementia, especially in Alzheimer’s disease (AD). The aim of this research is to investigate the effects of centrally acting ACE inhibitors (CACE-Is) on the rate of cognitive and functional decline in dementia, using a three-phase KDD process. KDD, a scientific way to process and analyse clinical data, is used to find useful insights in a variety of clinical databases. The data used are from three clinical databases: the Geriatric Assessment Tool (GAT), the Doxycycline and Rifampin for Alzheimer’s Disease (DARAD), and the Qmci validation databases, which were derived from several different geriatric clinics in Canada. This research involves patients diagnosed with AD, vascular or mixed dementia only. Patients were included if baseline and end-point (at least six months apart) Standardised Mini-Mental State Examination (SMMSE), Quick Mild Cognitive Impairment (Qmci) or Activities of Daily Living (ADL) scores were available. The rates of change are compared between patients taking CACE-Is and those not currently treated with CACE-Is. The results suggest that there is a statistically significant difference in the rate of decline in cognitive and functional scores between CACE-I and NoCACE-I patients. This research also validates that the Qmci, a new short assessment test, has the potential to replace the current popular screening tests for cognition in the clinic and in clinical trials.
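The core comparison described above, annualized rates of change in two groups, can be sketched as follows. The scores and group sizes are hypothetical, not data from the GAT, DARAD or Qmci databases, and a plain Welch t statistic stands in for the study's actual analysis:

```python
import numpy as np

def annual_rate(baseline, endpoint, months):
    """Annualized change in a cognitive score (points per year);
    negative values indicate decline."""
    return (endpoint - baseline) / (months / 12)

def welch_t(a, b):
    """Welch's t statistic for two independent groups (no SciPy needed)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# Hypothetical SMMSE trajectories (baseline, endpoint, months apart):
cacei = [annual_rate(24, 22, 12), annual_rate(26, 25, 12),
         annual_rate(22, 21, 12)]
no_cacei = [annual_rate(25, 21, 12), annual_rate(23, 18, 12),
            annual_rate(24, 20, 12)]
t = welch_t(cacei, no_cacei)  # positive: CACE-I group declines more slowly
```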


We present fast functional photoacoustic microscopy (PAM) for three-dimensional high-resolution, high-speed imaging of the mouse brain, complementary to other imaging modalities. We implemented a single-wavelength pulse-width-based method with a one-dimensional imaging rate of 100 kHz to image blood oxygenation with capillary-level resolution. We applied PAM to image the vascular morphology, blood oxygenation, blood flow and oxygen metabolism in both resting and stimulated states in the mouse brain.


Human cadavers have long been used to teach human anatomy and are increasingly used in other disciplines. Different embalming techniques have been reported in the literature; however, there is no clear consensus among anatomists on the utility of embalmed cadavers for the teaching of anatomy. To this end, we aimed to survey British and Irish anatomy teachers to report their opinions on different preservation methods for the teaching of anatomy. In this project, eight human cadavers were embalmed using formalin, Genelyn, Thiel and Imperial College London - Soft Preserving (ICL-SP) techniques to compare different characteristics of these four techniques. The results of this thesis show that anatomy teachers consider hard-fixed cadavers not to be the most accurate teaching model in comparison to the human body, although they still serve as a useful teaching method (Chapter 2). In addition, our findings confirm that the joints of cadavers embalmed using the ICL-SP solution faithfully mimic the joints of an unembalmed cadaver compared to the other techniques (Chapter 3). Embalming a human body prevents the deterioration in the quality of images, and our findings highlight that the influence of the embalming solutions varied with the radiological modality used (Chapter 4). The method developed as part of this thesis enables anatomists and forensic scientists to quantify the decomposition rate of an embalmed human cadaver (Chapter 5). Formalin embalming solution showed the strongest antimicrobial abilities, followed by Thiel, Genelyn and finally ICL-SP (Chapter 6). The overarching viewpoint of this set of studies shows that it is inaccurate to state that one embalming technique is ultimately the best. The value of each technique differs based on the requirements of the particular education or research area. Hence we highlight how different embalming techniques may be better suited to certain fields of study.


A class of lifetime distributions which has received considerable attention in the modelling and analysis of lifetime data is the class of lifetime distributions with bath-tub shaped failure rate functions, because of their extensive applications. The purpose of this thesis was to introduce a new class of bivariate lifetime distributions with bath-tub shaped failure rates (BTFRFs). In this research, we first reviewed univariate lifetime distributions with bath-tub shaped failure rates and several multivariate extensions of a univariate failure rate function. We then introduced a new class of bivariate distributions with bath-tub shaped failure rates (hazard gradients). Specifically, the new class of bivariate lifetime distributions was developed using Morgenstern's method of defining a bivariate class of distributions with given marginals. Computer simulations and numerical computations were used to investigate the properties of these distributions.
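Morgenstern's construction mentioned above combines given marginals into a bivariate family. A minimal sketch using, as an assumed example marginal, the Chen (2000) distribution, whose failure rate is bath-tub shaped for beta < 1; the parameter values are illustrative, not those of the thesis:

```python
import math

def chen_cdf(t, lam=0.5, beta=0.5):
    """Chen (2000) lifetime CDF, F(t) = 1 - exp(lam * (1 - exp(t**beta))),
    whose failure rate is bath-tub shaped when beta < 1."""
    return 1 - math.exp(lam * (1 - math.exp(t ** beta)))

def fgm_bivariate_cdf(x, y, alpha=0.3):
    """Morgenstern (FGM) construction with the Chen CDF as both marginals:
        H(x, y) = F(x) F(y) [1 + alpha (1 - F(x)) (1 - F(y))],
    valid for |alpha| <= 1; alpha = 0 gives independence."""
    fx, fy = chen_cdf(x), chen_cdf(y)
    return fx * fy * (1 + alpha * (1 - fx) * (1 - fy))

# alpha = 0 recovers the product of the marginals; positive alpha
# pushes the joint CDF above it (positive dependence).
h_ind = fgm_bivariate_cdf(1.0, 2.0, alpha=0.0)
h_dep = fgm_bivariate_cdf(1.0, 2.0, alpha=0.3)
```

The thesis's actual class involves hazard gradients rather than this particular marginal, so the sketch only illustrates the general Morgenstern mechanism of dressing fixed marginals with a dependence parameter.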


The MAREDAT atlas covers 11 types of plankton, ranging in size from bacteria to jellyfish. Together, these plankton groups determine the health and productivity of the global ocean and play a vital role in the global carbon cycle. Working within a uniform and consistent spatial and depth grid (map) of the global ocean, the researchers compiled thousands to tens of thousands of data points to identify regions of plankton abundance and scarcity, as well as areas of data abundance and scarcity. At many of the grid points, the MAREDAT team accomplished the difficult conversion from abundance (numbers of organisms) to biomass (carbon mass of organisms). The MAREDAT atlas provides an unprecedented global data set for ecological and biochemical analysis and modeling, as well as a clear mandate for compiling additional existing data and for focusing future data-gathering efforts on key groups in key areas of the ocean. The present data set provides depth-integrated values of diazotroph nitrogen fixation rates, computed from a collection of source data sets.
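Depth integration of a rate profile, as performed for the diazotroph nitrogen fixation rates above, can be sketched with a simple trapezoidal rule. This is a generic sketch of the operation, not the MAREDAT computation itself:

```python
def depth_integrate(depths, rates):
    """Trapezoidal depth integration of a rate profile.

    depths -- increasing depths (m)
    rates  -- rate at each depth (e.g. volumetric N2 fixation per m^3)
    Returns the depth-integrated (areal, per-m^2) value.
    """
    total = 0.0
    for (z0, r0), (z1, r1) in zip(zip(depths, rates),
                                  zip(depths[1:], rates[1:])):
        total += 0.5 * (r0 + r1) * (z1 - z0)
    return total

# A uniform rate of 2 units over 0-100 m integrates to 200 units per m^2.
val = depth_integrate([0, 50, 100], [2, 2, 2])
```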


PURPOSE: This study sought to establish whether functional analysis of the ATM-p53-p21 pathway adds to the information provided by currently available prognostic factors in patients with chronic lymphocytic leukemia (CLL) requiring frontline chemotherapy. EXPERIMENTAL DESIGN: Cryopreserved blood mononuclear cells from 278 patients entering the LRF CLL4 trial comparing chlorambucil, fludarabine, and fludarabine plus cyclophosphamide were analyzed for ATM-p53-p21 pathway defects using an ex vivo functional assay that uses ionizing radiation to activate ATM and flow cytometry to measure upregulation of p53 and p21 proteins. Clinical endpoints were compared between groups of patients defined by their pathway status. RESULTS: ATM-p53-p21 pathway defects of four different types (A, B, C, and D) were identified in 194 of 278 (70%) samples. The type A defect (high constitutive p53 expression combined with impaired p21 upregulation) and the type C defect (impaired p21 upregulation despite an intact p53 response) were each associated with short progression-free survival. The type A defect was associated with chemoresistance, whereas the type C defect was associated with early relapse. As expected, the type A defect was strongly associated with TP53 deletion/mutation. In contrast, the type C defect was not associated with any of the other prognostic factors examined, including TP53/ATM deletion, TP53 mutation, and IGHV mutational status. Detection of the type C defect added to the prognostic information provided by TP53/ATM deletion, TP53 mutation, and IGHV status. CONCLUSION: Our findings implicate blockade of the ATM-p53-p21 pathway at the level of p21 as a hitherto unrecognized determinant of early disease recurrence following successful cytoreduction.

Relevance:

40.00%

Publisher:

Abstract:

Alkali tantalates and niobates, including K(Ta/Nb)O3, Li(Ta/Nb)O3 and Na(Ta/Nb)O3, are a very promising ferroic family of lead-free compounds with perovskite-like structures. Their versatile properties make them potentially interesting for current and future applications in microelectronics, photocatalysis, energy and biomedicine. Among them, potassium tantalate, KTaO3 (KTO), has been raising interest as an alternative to the well-known strontium titanate, SrTiO3 (STO). KTO is a perovskite oxide with quantum paraelectric behaviour when electrically stimulated and a highly polarizable lattice, giving the opportunity to tailor its properties via external or internal stimuli. However, problems related to the fabrication of both bulk and 2D nanostructures mean that KTO is not yet a viable alternative to STO. Within this context, and to contribute scientifically to leveraging applications of tantalate-based compounds, the main goals of this thesis are: i) to produce and characterise thin films of alkali tantalates by chemical solution deposition on rigid Si-based substrates, at reduced temperatures compatible with Si technology; ii) to fill scientific knowledge gaps in these relevant functional materials related to their energetics; and iii) to exploit alternative applications for alkali tantalates, such as photocatalysis. Concerning synthesis, attention was given to understanding phase formation in potassium tantalate synthesized via distinct routes, in order to control the crystallization of the desired perovskite structure and to avoid low-temperature pyrochlore or K-deficient phases. The phase formation process in alkali tantalates is far from being as deeply analysed as that of Pb-containing perovskites; therefore, the work was initially focused on the process-phase relationship to identify the driving forces that regulate the synthesis. A comparison of the phase formation paths in the conventional solid-state reaction and the sol-gel method was conducted.
The structural analyses revealed that the intermediate pyrochlore K2Ta2O6 structure is not formed at any stage of the conventional solid-state reaction. In solution-based processes such as the alkoxide-based route, on the other hand, crystallization of the perovskite occurs through the intermediate pyrochlore phase: at low temperatures pyrochlore is dominant, and it transforms to perovskite at >800 °C. Kinetic analysis using the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model and quantitative X-ray diffraction (XRD) demonstrated that in sol-gel-derived powders crystallization occurs in two stages: i) an early stage dominated by primary nucleation, in which the mechanism is phase-boundary controlled, and ii) a second stage in which the low value of the Avrami exponent, n ~ 0.3, does not fall into any reported category, preventing easy identification of the mechanism. Then, in collaboration with the group of Prof. Alexandra Navrotsky at the University of California, Davis (USA), thermodynamic studies were conducted using high-temperature oxide melt solution calorimetry. The enthalpies of formation of three structures were determined: pyrochlore, perovskite and the tetragonal tungsten bronze K6Ta10.8O30 (TTB). The enthalpies of formation from the corresponding oxides, ∆Hf,ox, for KTaO3, KTa2.2O6 and K6Ta10.8O30 are -203.63 ± 2.84 kJ/mol, -358.02 ± 3.74 kJ/mol, and -1252.34 ± 10.10 kJ/mol, respectively, whereas those from the elements, ∆Hf,el, are -1408.96 ± 3.73 kJ/mol, -2790.82 ± 6.06 kJ/mol, and -13393.04 ± 31.15 kJ/mol, respectively. Possible decomposition reactions of the K-deficient KTa2.2O6 pyrochlore to KTaO3 perovskite and Ta2O5 (reaction 1) or to TTB K6Ta10.8O30 and Ta2O5 (reaction 2) were proposed, and their enthalpies were calculated to be 308.79 ± 4.41 kJ/mol and 895.79 ± 8.64 kJ/mol, respectively.
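The Avrami exponent mentioned above is conventionally extracted from the linearized JMAK equation, ln(-ln(1 - x)) = ln k + n ln t. The sketch below fits synthetic, noise-free data generated with an assumed n = 0.3 and k = 0.05 (illustrative values only, chosen to mirror the exponent reported in the text); it is not the authors' actual fitting procedure.

```python
import math

def avrami_fit(times, fractions):
    """Estimate JMAK parameters n and k by ordinary least squares on the
    linearized form ln(-ln(1 - x)) = ln(k) + n * ln(t)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(1.0 - x)) for x in fractions]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(v * v for v in xs)
    sxy = sum(a * b for a, b in zip(xs, ys))
    n = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    ln_k = (sy - n * sx) / m
    return n, math.exp(ln_k)

# Synthetic transformed-fraction data from x(t) = 1 - exp(-k * t**n).
n_true, k_true = 0.3, 0.05
times = [1.0, 5.0, 10.0, 50.0, 100.0]
fractions = [1.0 - math.exp(-k_true * t**n_true) for t in times]

n_est, k_est = avrami_fit(times, fractions)
print(round(n_est, 3), round(k_est, 3))  # 0.3 0.05
```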
The reactions are strongly endothermic, indicating that these decompositions are energetically unfavourable, since it is unlikely that any entropy term could override such a large positive enthalpy. The energetic studies prove that pyrochlore is an energetically more stable phase than perovskite at low temperature. Thus, the local order of the amorphous precipitates drives the crystallization into the most favourable structure, pyrochlore, which has a similar local organization; the distance between nearest neighbours in the amorphous, short-range-ordered phase is very close to that in pyrochlore. Given the stoichiometric deviation in the KTO system, selecting the most appropriate fabrication/deposition technique in thin-film technology is a key issue, especially for complex ferroelectric oxides. Chemical solution deposition has been widely reported as a processing method for growing KTO thin films, but the classical alkoxide route crystallizes the perovskite phase only at temperatures >800 °C, while the temperature endurance of platinized Si wafers is ~700 °C. Therefore, alternative diol-based routes with distinct potassium carboxylate precursors were developed, aiming to stabilize the precursor solution, avoid toxic solvents and decrease the crystallization temperature of the perovskite phase. Studies on powders revealed that in the case of KTOac (a solution based on potassium acetate), a mixture of perovskite and pyrochlore phases is detected at temperatures as low as 450 °C, with gradual transformation towards the perovskite structure as the temperature increases to 750 °C; however, the desired monophasic KTaO3 perovskite phase is not achieved.
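The decomposition enthalpies quoted above follow from the oxide-referenced formation enthalpies by Hess's law. The quick cross-check below reproduces the reported values; the stoichiometric normalization (reaction 1 written per 2 mol of pyrochlore, reaction 2 per 1 mol of TTB) is our inference from the numbers, not stated in the text, and the ~0.01 kJ/mol discrepancy is rounding of the tabulated inputs.

```python
# Oxide-referenced formation enthalpies, dHf_ox (kJ/mol), from the text.
dHf_ox = {
    "KTaO3": -203.63,         # perovskite
    "KTa2.2O6": -358.02,      # K-deficient pyrochlore
    "K6Ta10.8O30": -1252.34,  # tetragonal tungsten bronze (TTB)
    "Ta2O5": 0.0,             # reference binary oxide
}

# Reaction 1: 2 KTa2.2O6 -> 2 KTaO3 + 1.2 Ta2O5
dH1 = 2 * dHf_ox["KTaO3"] + 1.2 * dHf_ox["Ta2O5"] - 2 * dHf_ox["KTa2.2O6"]

# Reaction 2: 6 KTa2.2O6 -> K6Ta10.8O30 + 1.2 Ta2O5
dH2 = dHf_ox["K6Ta10.8O30"] + 1.2 * dHf_ox["Ta2O5"] - 6 * dHf_ox["KTa2.2O6"]

print(round(dH1, 2), round(dH2, 2))  # 308.78 895.78 (reported: 308.79, 895.79)
```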
In the case of KTOacac (a solution with potassium acetylacetonate), a broad peak characteristic of amorphous structures is detected at temperatures <650 °C, while at higher temperatures diffraction lines from the pyrochlore and perovskite phases are visible, and monophasic perovskite KTaO3 is formed at >700 °C. Infrared analysis indicated that the differences are due to a strong deformation of the carbonate-based structures upon heating. A series of alkali tantalate thin films were spin-coated onto Si-based substrates using the diol-based routes. Interestingly, monophasic perovskite KTaO3 films deposited from the KTOacac solution were obtained at temperatures as low as 650 °C; the films were annealed in a rapid thermal furnace in an oxygen atmosphere for 5 min at a heating rate of 30 °C/s. Other compositions of the tantalum-based system, LiTaO3 (LTO) and NaTaO3 (NTO), were also successfully deposited onto Si substrates at 650 °C. The ferroelectric character of LTO at room temperature was demonstrated. Some dielectric properties of KTO could not be measured in a parallel-capacitor configuration because of substrate-film or film-electrode interface effects; further studies are needed to overcome this issue. Application-oriented studies were also conducted, with two case studies: i) the photocatalytic activity of alkali tantalates and niobates for the decomposition of a pollutant, and ii) the bioactivity of alkali tantalate ferroelectric films as functional coatings for bone regeneration. Much attention has recently been paid to developing new types of photocatalytic materials, and tantalum and niobium oxide based compositions have been demonstrated to be active photocatalysts for water splitting due to the high potential of their conduction bands. Thus, various powders from the alkali tantalate and niobate families were tested as catalysts for methylene blue degradation.
The results showed promising activities for some of the tested compounds; KNbO3 is the most active among them, reaching over 50 % degradation of the dye after 7 h under UVA exposure, and further modification of the powders could improve the performance. In the context of bone regeneration, it is important to have platforms that, given appropriate stimuli, can support the attachment and direct the growth, proliferation and differentiation of cells. To this end, we exploited an alternative strategy for bone implants and repairs based on charge-mediated signals for bone regeneration. This strategy consists of coating metallic 316L-type stainless steel (316L-SST) substrates with ferroelectric LiTaO3 layers functionalized via electrical charging or UV-light irradiation. It was demonstrated that the formation of surface calcium phosphates and protein adsorption are considerably enhanced on 316L-SST functionalized with ferroelectric coatings. Our approach can be viewed as a set of guidelines for the development of electrically functionalized platforms that stimulate tissue regeneration, promoting direct integration of the implant into the host tissue by bone ingrowth and hence ultimately contributing to reduced implant failure.
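Methylene blue photodegradation is commonly modelled with pseudo-first-order kinetics, C/C0 = exp(-k·t); under that assumption (ours, not stated in the text), the reported ~50 % degradation after 7 h for KNbO3 corresponds to an apparent rate constant of roughly 0.1 h⁻¹:

```python
import math

def pseudo_first_order_k(fraction_degraded, hours):
    """Apparent rate constant k (h^-1) from C/C0 = exp(-k * t),
    i.e. k = -ln(1 - fraction_degraded) / t."""
    return -math.log(1.0 - fraction_degraded) / hours

# ~50 % dye degradation after 7 h of UVA exposure (from the text).
k_app = pseudo_first_order_k(0.50, 7.0)
print(round(k_app, 3))  # 0.099
```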