833 results for Robustness


Relevance: 10.00%

Publisher:

Abstract:

This study investigated fingermark residues using Fourier transform infrared microscopy (μ-FTIR) in order to obtain fundamental information about the marks' initial composition and aging kinetics. This knowledge would be an asset for fundamental research on fingermarks, for example for dating purposes. Attenuated Total Reflection (ATR) and single-point reflection modes were tested on fresh fingermarks. ATR proved to be better suited and was subsequently selected for further aging studies. Eccrine and sebaceous material was found in fresh and aged fingermarks, and the spectral regions 1000-1850 cm-1 and 2700-3600 cm-1 were identified as the most informative. The impact of substrates (aluminium and glass slides) and storage conditions (storage in the light and in the dark) on fingermark aging was also studied. Chemometric analyses showed that fingermarks could be grouped according to their age regardless of the substrate when they were stored in an open box in an air-conditioned laboratory at around 20°C next to a window. In contrast, when fingermarks were stored in the dark, only specimens deposited on the same substrate could be grouped by age; the substrate thus appeared to influence the aging of fingermarks in the dark. Furthermore, PLS regression analyses were conducted to study the possibility of modelling fingermark aging for potential dating applications. The resulting models showed an overall precision of ±3 days and clearly demonstrated their capability to differentiate older fingermarks (20 and 34 days old) from newer ones (1, 3, 7 and 9 days old) regardless of the substrate and lighting conditions. These results are promising from a fingermark dating perspective. Further research is required to fully validate such models and assess their robustness and limitations under uncontrolled casework conditions.


Concentration gradients provide spatial information for tissue patterning and cell organization, and their robustness under natural fluctuations is an evolutionary advantage. In rod-shaped Schizosaccharomyces pombe cells, gradients of the DYRK-family kinase Pom1 control cell division timing and placement. Upon dephosphorylation by a Tea4-phosphatase complex, Pom1 associates with the plasma membrane at cell poles, where it diffuses and detaches upon auto-phosphorylation. Here, we demonstrate that Pom1 auto-phosphorylates intermolecularly, both in vitro and in vivo, which confers robustness to the gradient. Quantitative imaging reveals this robustness through two system-level properties: the Pom1 gradient amplitude is inversely correlated with its decay length and is buffered against fluctuations in Tea4 levels. A theoretical model of Pom1 gradient formation through intermolecular auto-phosphorylation predicts both properties qualitatively and quantitatively. This provides a telling example where gradient robustness through super-linear decay, a principle hypothesized a decade ago, is achieved through autocatalysis. Concentration-dependent autocatalysis may be a widely used simple feedback to buffer biological activities.
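The buffering effect of super-linear decay can be illustrated with a minimal reaction-diffusion toy model: if detachment is proportional to the local concentration (as with intermolecular auto-phosphorylation), the decay term becomes quadratic and the distal tail of the gradient is largely insensitive to the source flux. All parameters below are arbitrary illustrative values, not fitted to Pom1 data.

```python
import numpy as np

def steady_gradient(J, D=1.0, k=1.0, L=10.0, n=100, iters=100_000):
    """Relax D*c'' = k*c^2 toward steady state, with influx J at the pole (x=0)."""
    dx = L / n
    dt = 0.2 * dx**2 / D              # stable explicit time step
    c = np.zeros(n)
    for _ in range(iters):
        lap = np.empty(n)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
        lap[0] = (c[1] - c[0]) / dx**2 + J / (D * dx)   # source flux at the pole
        lap[-1] = (c[-2] - c[-1]) / dx**2               # zero-flux far end
        c = c + dt * (D * lap - k * c**2)               # quadratic (super-linear) decay
    return c

c1 = steady_gradient(J=1.0)
c2 = steady_gradient(J=2.0)   # doubling the source barely moves the distal tail
```

Analytically the steady state decays as a power law, c(x) ∝ (x + x0)⁻², so far from the pole the profile is nearly independent of J: a sub-linear amplitude response and a buffered tail, as in the abstract.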


Coronary artery magnetic resonance imaging (MRI) has the potential to provide the cardiologist with relevant diagnostic information on the coronary artery disease of patients. The major challenge of cardiac MRI, though, is dealing with all the sources of motion that can corrupt the images and degrade the diagnostic information they provide. This thesis thus focused on the development of new MRI techniques that change the standard approach to cardiac motion compensation in order to increase the efficiency of cardiovascular MRI, to provide more flexibility and robustness, and to deliver new temporal and tissue information.
The proposed approaches help advance coronary magnetic resonance angiography (MRA) in the direction of an easy-to-use, multipurpose tool that can be translated to the clinical environment. The first part of the thesis focused on the study of coronary artery motion through gold-standard imaging techniques (x-ray angiography) in patients, in order to measure the precision with which the coronary arteries resume the same position beat after beat (coronary artery repositioning). We learned that intervals with minimal coronary artery repositioning occur in peak systole and in mid-diastole, and we responded with a new pulse sequence (T2-post) that is able to provide peak-systolic imaging. This sequence was tested in healthy volunteers and, from the image quality comparison, we learned that the proposed approach provides coronary artery visualization and contrast-to-noise ratio (CNR) comparable with the standard acquisition approach, but with increased signal-to-noise ratio (SNR). The second part of the thesis explored a completely new paradigm for whole-heart cardiovascular MRI. The proposed technique acquires the data continuously (free-running), instead of being triggered, thus increasing the efficiency of the acquisition and providing four-dimensional (4D) images of the whole heart, while respiratory self-navigation allows the scan to be performed in free breathing. This enabling technology allows for anatomical and functional evaluation in four dimensions, with high spatial and temporal resolution and without the need for contrast agent injection. The enabling step is the use of a golden-angle based 3D radial trajectory, which allows for continuous sampling of k-space and a retrospective selection of the timing parameters of the reconstructed dataset.
The free-running 4D acquisition was then combined with a compressed sensing reconstruction algorithm that further increases the temporal resolution of the 4D dataset while also increasing the overall image quality by removing undersampling artifacts. The obtained 4D images provide visualization of the whole coronary artery tree in each phase of the cardiac cycle and, at the same time, allow for the assessment of cardiac function with a single free-breathing scan. The quality of the coronary arteries provided by the frames of the free-running 4D acquisition is in line with that obtained with the standard ECG-triggered acquisition, and the cardiac function evaluation matched that measured with gold-standard stacks of 2D cine acquisitions. Finally, the last part of the thesis focused on the development of an ultrashort echo time (UTE) acquisition scheme for in vivo detection of calcification in the coronary arteries. Recent studies showed that UTE imaging allows for the detection of coronary artery plaque calcification ex vivo, since it is able to capture the short-T2 components of the calcification. Cardiac motion, though, has so far prevented this technique from being applied in vivo. An ECG-triggered, self-navigated 3D radial triple-echo UTE acquisition was therefore developed and tested in healthy volunteers. The proposed sequence combines a 3D self-navigation approach with a 3D radial UTE acquisition, enabling data collection during free breathing. Three echoes are simultaneously acquired to extract the short-T2 components of the calcification, while a water-fat separation technique allows for proper visualization of the coronary arteries. Even though the results are still preliminary, the proposed sequence showed great potential for the in vivo visualization of coronary artery calcification. In conclusion, the thesis presents three novel MRI approaches aimed at improved characterization and assessment of atherosclerotic coronary artery disease.
These approaches provide new anatomical and functional information in four dimensions, and support tissue characterization for coronary artery plaques.
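The golden-angle based 3D radial sampling mentioned above can be sketched in a few lines. One published way to generate such a spoke ordering is the double golden-angle scheme of Chan et al., used here as a stand-in: the thesis' actual trajectory may use a different golden-angle ordering, so treat the constants and scheme below as illustrative only.

```python
import numpy as np

# Double golden-angle ordering: spoke i gets its polar coordinate and azimuth
# from two 2D golden means, so any contiguous run of spokes covers the sphere
# near-uniformly and the temporal window can be chosen retrospectively.
PHI1, PHI2 = 0.4656, 0.6823   # 2D golden means (Chan et al.)

def spoke_direction(i):
    z = 1.0 - 2.0 * ((i * PHI1) % 1.0)       # uniform in z => uniform on the sphere
    theta = 2.0 * np.pi * ((i * PHI2) % 1.0) # azimuthal angle
    r = np.sqrt(1.0 - z**2)
    return np.array([r * np.cos(theta), r * np.sin(theta), z])

spokes = np.stack([spoke_direction(i) for i in range(1000)])
```

Because consecutive spokes are quasi-uniformly spread, data can be binned into cardiac phases a posteriori, which is the property the free-running 4D reconstruction relies on.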


In this article, the fusion of a stochastic metaheuristic, Simulated Annealing (SA), with classical convergence criteria for Blind Separation of Sources (BSS) is presented. Although BSS by means of various techniques, including ICA, PCA, and neural networks, has been amply discussed in the literature, the possibility of using simulated annealing algorithms has not been seriously explored to date. Based on experimental results, this paper demonstrates the benefits offered by SA in combination with higher-order statistics and mutual-information criteria for BSS, such as robustness against local minima and a high degree of flexibility in the energy function.
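A minimal sketch of the idea, under assumptions of my own: two whitened mixtures are separated by a single rotation angle, and SA minimizes a kurtosis-based contrast over that angle. The sources, mixing matrix, energy function and cooling schedule are all illustrative choices, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two independent non-Gaussian sources, linearly mixed (toy BSS problem).
n = 5000
s = np.vstack([np.sign(rng.normal(size=n)) * rng.random(n) ** 2,  # super-Gaussian
               rng.uniform(-1.0, 1.0, n)])                        # sub-Gaussian
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# Whiten the mixtures so that a pure rotation suffices to separate them.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = (E / np.sqrt(d)) @ E.T @ x

def energy(theta):
    """Contrast to minimize: negative sum of squared excess kurtoses."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    y = R @ z
    kurt = (y ** 4).mean(axis=1) / (y ** 2).mean(axis=1) ** 2 - 3.0
    return -np.sum(kurt ** 2)

# Plain simulated annealing over the rotation angle.
theta, e = 0.0, energy(0.0)
best_theta, best_e = theta, e
T = 1.0
for _ in range(2000):
    cand = theta + rng.normal(scale=0.3)
    ec = energy(cand)
    if ec < e or rng.random() < np.exp(-(ec - e) / T):   # Metropolis acceptance
        theta, e = cand, ec
        if e < best_e:
            best_theta, best_e = theta, e
    T *= 0.998   # geometric cooling
```

The flexibility claimed in the abstract shows up in `energy`: any contrast (higher-order statistics, mutual information) can be dropped in without changing the optimizer.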


Although fetal anatomy can be adequately viewed in new multi-slice MR images, many critical limitations remain for quantitative data analysis. To this end, several research groups have recently developed advanced image processing methods, often denoted super-resolution (SR) techniques, to reconstruct a high-resolution (HR) motion-free volume from a set of clinical low-resolution (LR) images. The reconstruction is usually modeled as an inverse problem in which the regularization term plays a central role in the reconstruction quality. The literature has been strongly drawn to Total Variation energies because of their edge-preserving ability, but only standard explicit steepest-gradient techniques have been applied for optimization. In a preliminary work, it was shown that novel fast convex optimization techniques could be successfully applied to design an efficient Total Variation optimization algorithm for the super-resolution problem. In this work, two major contributions are presented. First, we briefly review the Bayesian and Variational dual formulations of current state-of-the-art methods dedicated to fetal MRI reconstruction. Second, we present an extensive quantitative evaluation of our previously introduced SR algorithm on both simulated fetal and real clinical data (with both normal and pathological subjects). Specifically, we study the robustness of regularization terms with respect to residual registration errors, and we present a novel strategy for automatically selecting the weight of the regularization relative to the data fidelity term. Our results show that our TV implementation is highly robust to motion artifacts and that it offers the best trade-off between speed and accuracy for fetal MRI recovery in comparison with state-of-the-art methods.
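The inverse-problem formulation can be sketched in one dimension: several shifted, blurred, decimated observations of an unknown signal are fused by minimizing a data-fidelity term plus a Total Variation regularizer. The paper uses fast convex (dual) solvers on 3-D volumes; the toy below uses plain gradient descent on a smoothed TV, and every parameter (PSF, shifts, weights) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth piecewise-constant HR signal and two shifted, blurred,
# decimated LR observations of it (a 1-D stand-in for the LR fetal slices).
hr = np.repeat([0.0, 1.0, 0.2, 0.8], 32)
psf = np.ones(3) / 3.0                      # symmetric box PSF (self-adjoint)

def downsample(u, shift):
    return np.convolve(u, psf, mode="same")[shift::4]

shifts = (0, 2)
lrs = [downsample(hr, s) + 0.05 * rng.normal(size=32) for s in shifts]

# Gradient descent on  sum_k ||D_k u - y_k||^2 + lam * TV_eps(u),
# with smoothed TV:  TV_eps(u) = sum_i sqrt((u_{i+1} - u_i)^2 + eps).
lam, eps, step = 0.05, 1e-3, 0.1
u = np.zeros_like(hr)
for _ in range(3000):
    grad = np.zeros_like(u)
    for s, y in zip(shifts, lrs):
        r = downsample(u, s) - y
        up = np.zeros_like(u)
        up[s::4] = r                                      # adjoint of the decimation
        grad += 2.0 * np.convolve(up, psf, mode="same")   # adjoint of the PSF
    d = np.diff(u)
    tvg = d / np.sqrt(d**2 + eps)                         # gradient of smoothed TV
    grad[:-1] -= lam * tvg
    grad[1:] += lam * tvg
    u -= step * grad

mse = float(np.mean((u - hr) ** 2))
```

The TV term is what preserves the sharp edges while the data term fuses the complementary LR samples; weighting `lam` against the data fidelity is exactly the balance the paper proposes to select automatically.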


Topological order has proven a useful concept to describe quantum phase transitions which are not captured by the Ginzburg-Landau type of symmetry-breaking order. However, lacking a local order parameter, topological order is hard to detect. One way to detect it is via direct observation of the anyonic properties of excitations, which are usually discussed in the thermodynamic limit but have so far not been realized in macroscopic quantum Hall samples. Here we consider a system of few interacting bosons confined to the lowest Landau level by a gauge potential, and theoretically investigate vortex excitations in order to identify topological properties of different ground states. Our investigation demonstrates that even in surprisingly small systems anyonic properties are able to characterize the topological order. In addition, focusing on a system in the Laughlin state, we study the robustness of its anyonic behavior in the presence of tunable finite-range interactions acting as a perturbation. A clear signal of a transition to a different state is reflected by the system's anyonic properties.


Daily rhythmicity in the locomotor activity of laboratory animals has been studied in great detail for many decades, but the daily pattern of locomotor activity has not received as much attention in humans. We collected waist-worn accelerometer data from more than 2000 individuals from five countries differing in socioeconomic development and conducted a detailed analysis of human locomotor activity. Body mass index (BMI) was computed from height and weight. Individual activity records lasting 7 days were subjected to cosinor analysis to determine the parameters of the daily activity rhythm: mesor (mean level), amplitude (half the range of excursion), acrophase (time of the peak) and robustness (rhythm strength). The activity records of all individual participants exhibited statistically significant 24-h rhythmicity, with activity increasing noticeably a few hours after sunrise and dropping off around the time of sunset, with a peak at 1:42 pm on average. The acrophase of the daily rhythm was comparable in men and women in each country but varied by as much as 3 h from country to country. Quantification of the socioeconomic stages of the five countries yielded suggestive evidence that more developed countries have more obese residents, who are less active, and who are active later in the day than residents from less developed countries. These results provide a detailed characterization of the daily activity pattern of individual human beings and reveal similarities and differences among people from five countries differing in socioeconomic development.
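The cosinor analysis described above reduces to ordinary least squares, because a cosine of known 24-h period is linear in its mesor and in the coefficients of cos/sin. The sketch below recovers the four rhythm parameters from a synthetic record; the activity values and noise level are invented for illustration, not taken from the study's accelerometer data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 7-day activity record sampled every 30 min (illustrative values).
t = np.arange(0.0, 7 * 24, 0.5)                      # time in hours
true_mesor, true_amp, true_acro = 200.0, 80.0, 13.7  # peak at ~1:42 pm
activity = (true_mesor
            + true_amp * np.cos(2 * np.pi * (t - true_acro) / 24.0)
            + 10.0 * rng.normal(size=t.size))

# Single-component cosinor: a + b*cos(wt) + c*sin(wt) fitted by least squares.
w = 2.0 * np.pi / 24.0
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
mesor, b, c = np.linalg.lstsq(X, activity, rcond=None)[0]
amplitude = float(np.hypot(b, c))                 # half the range of excursion
acrophase = float((np.arctan2(c, b) / w) % 24.0)  # hour of the daily peak
```

Rhythm robustness, the fourth parameter, is typically quantified from the fraction of variance explained by this fit.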


Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems, ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
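A discretized sketch of the functional error model: proxy and exact response curves are projected onto their leading principal components (FPCA on a grid), a linear regression maps proxy scores to exact scores on the training subset, and the fitted map corrects the remaining proxy curves. The sigmoid "breakthrough" curves, subset size and number of components are illustrative assumptions, not the thesis' actual flow responses.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy breakthrough-like curves: exact response vs a biased, smoothed proxy.
tgrid = np.linspace(0.0, 1.0, 50)
n = 200
params = rng.uniform(0.3, 0.7, size=(n, 1))           # hidden per-realization parameter
exact = 1.0 / (1.0 + np.exp(-(tgrid - params) / 0.05))
proxy = 1.0 / (1.0 + np.exp(-(tgrid - 0.9 * params - 0.03) / 0.08))

train = np.arange(30)          # subset where the exact model was actually run
test = np.arange(30, n)

def pc_scores(curves, mean, comps):
    return (curves - mean) @ comps.T

# Discretized FPCA: leading principal components of the training curves.
mp, me = proxy[train].mean(axis=0), exact[train].mean(axis=0)
Up = np.linalg.svd(proxy[train] - mp, full_matrices=False)[2][:3]
Ue = np.linalg.svd(exact[train] - me, full_matrices=False)[2][:3]

# Linear error model: regress exact scores on proxy scores.
Sp = pc_scores(proxy[train], mp, Up)
Se = pc_scores(exact[train], me, Ue)
B = np.linalg.lstsq(np.column_stack([np.ones(len(train)), Sp]), Se, rcond=None)[0]

# Correct the remaining proxy curves to predict the exact responses.
Sp_test = pc_scores(proxy[test], mp, Up)
exact_pred = me + np.column_stack([np.ones(len(test)), Sp_test]) @ B @ Ue

mse_raw = float(np.mean((proxy[test] - exact[test]) ** 2))
mse_corr = float(np.mean((exact_pred - exact[test]) ** 2))
```

The corrected curves are far closer to the exact responses than the raw proxy, which is the gain the thesis exploits both for uncertainty propagation and as the first stage of two-stage MCMC.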


Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take the within-subject correlation into account. Methods: For repeated events data with censored failure times, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we assess the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study with bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal in all situations. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing numbers of recurrences, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, even though they are theoretically well developed.


BACKGROUND: For free-breathing cardiovascular magnetic resonance (CMR), the self-navigation technique has recently emerged and is expected to deliver high-quality data with a high success rate. The purpose of this study was to test the hypothesis that self-navigated 3D-CMR enables the reliable assessment of cardiovascular anatomy in patients with congenital heart disease (CHD) and to define factors that affect image quality. METHODS: CHD patients ≥2 years old and referred for CMR for initial assessment or for a follow-up study were included and underwent free-breathing self-navigated 3D CMR at 1.5T. Performance criteria were: correct description of cardiac segmental anatomy, overall image quality, coronary artery visibility, and reproducibility of great vessel diameter measurements. Factors associated with insufficient image quality were identified using multivariate logistic regression. RESULTS: Self-navigated CMR was performed in 105 patients (55% male, 23 ± 12 y). Correct segmental description was achieved in 93% and 96% of cases for observers 1 and 2, respectively. Diagnostic quality was obtained in 90% of examinations, increasing to 94% in contrast-enhanced examinations. The left anterior descending, circumflex, and right coronary arteries were visualized in 93%, 87% and 98% of cases, respectively. Younger age, higher heart rate, lower ejection fraction, and lack of contrast medium were independently associated with reduced image quality. However, a similar rate of diagnostic image quality was obtained in children and adults. CONCLUSION: In patients with CHD, self-navigated free-breathing CMR provides high-resolution 3D visualization of the heart and great vessels with excellent robustness.


PURPOSE: Because desmoid tumors exhibit an unpredictable clinical course, translational research is crucial to identify predictive factors of progression in addition to the clinical parameters. The main issue is to detect patients who are at a higher risk of progression. The aim of this work was to identify molecular markers that can predict progression-free survival (PFS). EXPERIMENTAL DESIGN: Gene-expression screening was conducted on 115 available independent untreated primary desmoid tumors using cDNA microarrays. We established a prognostic gene-expression signature composed of 36 genes. To test its robustness, we randomly generated 1,000 36-gene signatures, compared their association with outcome to that of our 36-gene signature, and calculated the positive predictive value (PPV) and negative predictive value (NPV). RESULTS: Multivariate analysis showed that our molecular signature had a significant impact on PFS while no clinical factor had any prognostic value. Among the 1,000 random signatures generated, 56.7% were significant and none was more significant than our 36-gene molecular signature. PPV and NPV were high (75.58% and 81.82%, respectively). Finally, the top two genes downregulated in the no-recurrence group were FECH and STOML2, and the top gene upregulated in the no-recurrence group was TRIP6. CONCLUSIONS: By analyzing expression profiles, we have identified a gene-expression signature that is able to predict PFS. This tool may be useful for prospective clinical studies. Clin Cancer Res; 21(18); 4194-200. ©2015 AACR.
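The random-signature robustness check has the shape of a permutation-style test: compute an association statistic for the candidate signature, recompute it for many random gene sets of the same size, and report how often the random ones match or beat it. The paper uses survival-based statistics on real expression data; the sketch below substitutes a simple correlation statistic on synthetic data, so the data, statistic and gene indices are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy data: 115 tumors x 5000 genes; 36 "informative" genes drive outcome.
n_tumors, n_genes, sig_size = 115, 5000, 36
expr = rng.normal(size=(n_tumors, n_genes))
signature = np.arange(sig_size)                 # hypothetical 36-gene signature
risk = expr[:, signature].mean(axis=1)
pfs = -risk + 0.5 * rng.normal(size=n_tumors)   # shorter PFS with higher risk

def assoc(genes):
    """Association statistic: |correlation| of the signature score with PFS."""
    score = expr[:, genes].mean(axis=1)
    return abs(np.corrcoef(score, pfs)[0, 1])

obs = assoc(signature)
random_stats = np.array([assoc(rng.choice(n_genes, sig_size, replace=False))
                         for _ in range(1000)])
p_value = float((random_stats >= obs).mean())   # empirical robustness p-value
```

A low empirical p-value here plays the role of "none of the 1,000 random signatures was more significant" in the abstract.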


We have investigated the behavior of bistable cells made up of four quantum dots and occupied by two electrons, in the presence of realistic confinement potentials produced by depletion gates on top of a GaAs/AlGaAs heterostructure. Such a cell represents the basic building block for logic architectures based on the concept of quantum cellular automata (QCA) and of ground state computation, which have been proposed as an alternative to traditional transistor-based logic circuits. We have focused on the robustness of the operation of such cells with respect to asymmetries derived from fabrication tolerances. We have developed a two-dimensional model for the calculation of the electron density in a driven cell in response to the polarization state of a driver cell. Our method is based on the one-shot configuration-interaction technique, adapted from molecular chemistry. From the results of our simulations, we conclude that an implementation of QCA logic based on simple "hole arrays" is not feasible, because of the extreme sensitivity to fabrication tolerances. As an alternative, we propose cells defined by multiple gates, where geometrical asymmetries can be compensated for by adjusting the bias voltages. Even though not immediately applicable to the implementation of logic gates and not suitable for large scale integration, the proposed cell layout should allow an experimental demonstration of a chain of QCA cells.
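The standard polarization measure for a four-dot cell, and the saturating driver-to-driven-cell transfer behavior that underlies the bistability discussed above, can be sketched in a minimal two-state model. The parameter values (coupling energy `ek`, tunneling energy `gamma`) are arbitrary illustrations, not fitted to the GaAs/AlGaAs structures studied here.

```python
import numpy as np

def polarization(rho):
    """Polarization of a four-dot cell from dot occupations
    rho = (p1, p2, p3, p4), with dots numbered so that 1-3 and
    2-4 form the two diagonals of the cell."""
    p1, p2, p3, p4 = rho
    return ((p1 + p3) - (p2 + p4)) / (p1 + p2 + p3 + p4)

def cell_response(p_driver, ek=1.0, gamma=0.05):
    """Driven-cell polarization in a minimal two-state model:
    ek is the electrostatic coupling energy, gamma the inter-dot
    tunneling energy. Small gamma / ek gives a sharply saturating,
    bistable-like transfer curve."""
    x = ek * p_driver / (2.0 * gamma)
    return x / np.sqrt(1.0 + x * x)

# The two ideal charge configurations give polarization +1 and -1
print(polarization((1, 0, 1, 0)), polarization((0, 1, 0, 1)))
# Even a weakly polarized driver pushes the driven cell toward saturation
print(round(cell_response(0.2), 3))
```

The nonlinear, saturating response is precisely what makes a QCA chain restore logic levels from cell to cell; fabrication asymmetries of the kind discussed above effectively bias this transfer curve away from its symmetric form.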


Lexical diversity measures are notoriously sensitive to variations of sample size and recent approaches to this issue typically involve the computation of the average variety of lexical units in random subsamples of fixed size. This methodology has been further extended to measures of inflectional diversity such as the average number of wordforms per lexeme, also known as the mean size of paradigm (MSP) index. In this contribution we argue that, while random sampling can indeed be used to increase the robustness of inflectional diversity measures, using a fixed subsample size is only justified under the hypothesis that the corpora that we compare have the same degree of lexematic diversity. In the more general case where they may have differing degrees of lexematic diversity, a more sophisticated strategy can and should be adopted. A novel approach to the measurement of inflectional diversity is proposed, aiming to cope not only with variations of sample size, but also with variations of lexematic diversity. The robustness of this new method is empirically assessed and the results show that while there is still room for improvement, the proposed methodology considerably attenuates the impact of lexematic diversity discrepancies on the measurement of inflectional diversity.
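The MSP index and its fixed-size random-subsampling variant described above can be sketched as follows. The toy corpus of (lexeme, wordform) token pairs is invented for illustration.

```python
import random
from collections import defaultdict

def msp(tokens):
    """Mean size of paradigm: number of distinct wordforms
    divided by number of distinct lexemes in the sample."""
    forms = defaultdict(set)
    for lexeme, wordform in tokens:
        forms[lexeme].add(wordform)
    return sum(len(s) for s in forms.values()) / len(forms)

def mean_msp_subsamples(tokens, size, n_samples=1000, seed=0):
    """Average MSP over random subsamples of fixed token size:
    the sample-size correction discussed above."""
    rng = random.Random(seed)
    return sum(msp(rng.sample(tokens, size)) for _ in range(n_samples)) / n_samples

# Toy corpus: (lexeme, inflected wordform) token pairs
corpus = [("be", "is"), ("be", "was"), ("be", "are"), ("be", "is"),
          ("walk", "walks"), ("walk", "walked"), ("dog", "dogs"),
          ("dog", "dog"), ("dog", "dogs"), ("run", "ran")]

full_msp = msp(corpus)                       # MSP of the full sample
subsampled = mean_msp_subsamples(corpus, 5)  # fixed-size subsampling estimate
print(full_msp, round(subsampled, 3))
```

Note that the subsampled estimate still holds the number of *tokens* fixed, not the number of *lexemes*, which is exactly the limitation the contribution above sets out to address when corpora differ in lexematic diversity.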


This paper tests the robustness of estimates of market access impact on regional variability in human capital, as previously derived in the NEG literature. Our hypothesis is that these estimates of the coefficient of market access, in fact, capture the effects of regional differences in the industrial mix and the spatial dependence in the distribution of human capital. Results for the Spanish provinces indicate that the estimated impact of market access vanishes and becomes non-significant once these two elements are included in the empirical analysis.


We provide robust and compelling evidence of the marked impact of tertiary education on the economic growth of less developed countries, and of its relatively smaller impact on the growth of developed ones. Our results argue in favor of the accumulation of high skill levels, especially in technologically under-developed countries and, contrary to common wisdom, independently of the fact that these economies might initially produce lower-technology goods or perform technology imitation. Our results are robust to the different measures used to proxy human capital and to the adjustments made for cross-country differences in the quality of education. Country-specific institutional quality, as well as other indicators including legal origin, religious fractionalization, and openness to trade, have been used to check the robustness of the results. These factors are also shown to speed up technology convergence, thereby confirming previous empirical studies. Our estimates tackle problems of endogeneity by adopting a variety of techniques, including instrumental variables (for both panel and cross-section analyses) and the two-step efficient dynamic system GMM.