52 results for quantum fields
Abstract:
Gel electrophoresis allows one to separate knotted DNA (nicked circular) of equal length according to the knot type. At low electric fields, complex knots, being more compact, drift faster than simpler knots. Recent experiments have shown that the drift velocity dependence on the knot type is inverted when changing from low to high electric fields. We present a computer simulation on a lattice of a closed, knotted, charged DNA chain drifting in an external electric field in a topologically restricted medium. Using a Monte Carlo algorithm, the dependence of the electrophoretic migration of the DNA molecules on the knot type and on the electric field intensity is investigated. The results are in qualitative and quantitative agreement with electrophoretic experiments done under conditions of low and high electric fields.
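To make the simulation setup concrete, here is a minimal sketch of the kind of Metropolis Monte Carlo move such a lattice simulation relies on. It is my own illustration, not the authors' algorithm: the field strength and chain length are arbitrary, the gel obstacles ("topologically restricted medium") are omitted, and real studies use richer move sets (crankshaft, reptation) for efficiency.

```python
import numpy as np

rng = np.random.default_rng(0)

E_FIELD = 0.1   # illustrative field along +x, in units of kT per lattice step
N = 32          # number of monomers in the closed (ring) chain

# Initialize a closed rectangular ring on the cubic lattice
ring = [np.array((i, 0, 0)) for i in range(N // 2)] + \
       [np.array((i, 1, 0)) for i in reversed(range(N // 2))]

UNIT_MOVES = [np.array(v) for v in
              [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]

def metropolis_step(ring):
    """One attempted single-monomer (corner-flip) move, biased by the field."""
    i = rng.integers(len(ring))
    new = ring[i] + UNIT_MOVES[rng.integers(6)]
    prev, nxt = ring[i - 1], ring[(i + 1) % len(ring)]
    # Connectivity: chain neighbours must remain lattice-adjacent
    if abs(new - prev).sum() != 1 or abs(new - nxt).sum() != 1:
        return
    # Self-avoidance: no two monomers may occupy the same site
    if any((new == p).all() for j, p in enumerate(ring) if j != i):
        return
    # Field bias: energy change -q*E*dx (charge absorbed into E_FIELD)
    d_energy = -E_FIELD * (new[0] - ring[i][0])
    if d_energy <= 0 or rng.random() < np.exp(-d_energy):
        ring[i] = new

for sweep in range(1000):
    for _ in range(N):
        metropolis_step(ring)

print("mean x after drift:", np.mean([p[0] for p in ring]))
```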
Abstract:
A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.
Abstract:
Résumé: Classical cryptography is based on mathematical concepts whose security depends on the computational difficulty of inverting functions. This type of encryption is at the mercy of the computing power of machines and of the discovery of algorithms that compute the inverses of certain mathematical functions in a "reasonable" time. Using a scheme whose security is scientifically proven is therefore indispensable, especially for critical exchanges (banking systems, governments, etc.). Quantum cryptography answers this need: its security rests on the laws of quantum physics, which guarantee unconditionally secure operation. However, applying and integrating quantum cryptography remains a concern for developers of this type of solution. This thesis justifies the need for quantum cryptography and shows that the cost incurred by deploying it is justified. It proposes a simple, practical mechanism for integrating quantum cryptography into widely used communication protocols such as PPP, IPSec and 802.11i. Application scenarios illustrate the feasibility of these solutions. An evaluation methodology, according to the Common Criteria, for solutions based on quantum cryptography is also proposed in this document. Abstract: Classical cryptography is based on mathematical functions. The robustness of a cryptosystem essentially depends on the difficulty of computing the inverse of its one-way function. There is no mathematical proof that finding the inverse of a given one-way function is impossible; it is therefore prudent to use a cryptosystem whose security is scientifically proven (especially for banking, government, etc.). The security of quantum cryptography, by contrast, can be formally demonstrated: it is based on laws of physics that assure unconditional security. How can quantum cryptography be used and integrated into existing solutions? This thesis proposes a method to integrate quantum cryptography into existing communication protocols such as PPP, IPSec and 802.11i. It sketches possible scenarios in order to prove feasibility and to estimate their cost. Directives and checkpoints are given to help certify quantum cryptography solutions according to the Common Criteria.
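The thesis concerns protocol integration rather than key generation, but a toy sketch of BB84 sifting shows where the quantum-generated secret that would feed PPP, IPSec or 802.11i comes from. This is illustrative only: there is no quantum channel, eavesdropping, error correction or privacy amplification, and all parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
N_QUBITS = 64  # illustrative number of transmitted qubits

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal)
alice_bits = rng.integers(2, size=N_QUBITS)
alice_bases = rng.integers(2, size=N_QUBITS)

# Bob measures each qubit in a randomly chosen basis;
# same basis -> Bob reads Alice's bit, different basis -> random outcome
bob_bases = rng.integers(2, size=N_QUBITS)
bob_bits = np.where(alice_bases == bob_bases,
                    alice_bits,
                    rng.integers(2, size=N_QUBITS))

# Sifting: keep only positions where the bases matched (announced publicly)
match = alice_bases == bob_bases
sifted_key = alice_bits[match]
assert (sifted_key == bob_bits[match]).all()  # no eavesdropper, no noise

print(f"sifted key ({match.sum()} bits):", "".join(map(str, sifted_key)))
# In an integration of the kind proposed here, such a key could replace a
# static pre-shared secret in IPSec/IKE or 802.11i key hierarchies.
```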
Abstract:
Purpose: Previous studies of the visual outcome in bilateral non-arteritic anterior ischemic optic neuropathy (NAION) have yielded conflicting results, specifically regarding congruity between fellow eyes. Prior studies have used measures of acuity and computerized perimetry, but none has compared Goldmann visual field outcomes between fellow eyes. In order to better define the concordance of visual loss in this condition, we reviewed our cases of bilateral sequential NAION, including measures of visual acuity, pupillary function, and both pattern and severity of visual field loss. Methods: We performed a retrospective chart review of 102 patients with a diagnosis of bilateral sequential NAION. Of the 102 patients, 86 were included in the study for analysis of final visual outcome between the affected eyes. Visual function was assessed using visual acuity, Goldmann visual fields, color vision, and relative afferent pupillary defect (RAPD). A quantitative total visual field score and a score per quadrant were analyzed for each eye using the numerical Goldmann visual field scoring method previously described by Esterman and colleagues. Based upon these scores, we calculated the total deviation and pattern deviation between fellow eyes and between eyes of different patients. Statistical significance was determined using nonparametric tests. Results: A statistically significant correlation was found between fellow eyes for multiple parameters, including logMAR visual acuity (P = 0.0101), global visual field (P = 0.0001), superior visual field (P = 0.0001), and inferior visual field (P = 0.0001). In addition, the mean deviation of both total (P = 7 × 10^-10) and pattern (P = 4 × 10^-9) deviation analyses was significantly less between fellow eyes ("intra"-eyes) than between eyes of different patients ("inter"-eyes). Conclusions: Visual function between fellow eyes showed a fair to moderate correlation that was statistically significant. The pattern of vision loss was also more similar in fellow eyes than between eyes of different patients. These results may allow better prediction of visual outcome for the second eye in patients with NAION. These findings may also be useful for evaluating the efficacy of therapeutic interventions.
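For readers wanting to reproduce this style of analysis, here is a minimal sketch of the two nonparametric comparisons described above. The scores are simulated placeholders, not the study data, and the specific tests (Spearman, Mann-Whitney) are plausible choices rather than those named in the paper, which specifies only "nonparametric tests".

```python
import numpy as np
from scipy.stats import spearmanr, mannwhitneyu

rng = np.random.default_rng(2)

# Hypothetical Esterman-style total visual field scores (0-100) for 20
# patients; the two arrays are fellow eyes of the same patient.
first_eye = rng.uniform(20, 90, size=20)
second_eye = first_eye + rng.normal(0, 15, size=20)  # correlated fellow eye

rho, p = spearmanr(first_eye, second_eye)
print(f"fellow-eye correlation: rho={rho:.2f}, P={p:.4g}")

# "Intra"-eye vs "inter"-eye deviations: |score difference| within patients
# versus differences between eyes of different patients.
intra = np.abs(first_eye - second_eye)
inter = np.abs(first_eye[:, None] - second_eye[None, :])
inter = inter[~np.eye(20, dtype=bool)]  # drop same-patient pairings
u, p2 = mannwhitneyu(intra, inter, alternative="less")
print(f"intra < inter deviations: U={u:.0f}, P={p2:.4g}")
```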
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and to treat them separately in an effective way. It provides better interpretability for non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad-hoc topographic indices as covariates in statistical models in complex terrain. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. We also examine the stability of the MKL algorithm with respect to the number of training samples and to the presence of noise. The results of a real case study are also presented, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales when predicting mean wind speed in an Alpine region.
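As a concrete illustration of the per-feature kernel idea, the sketch below combines one RBF kernel per input feature into a single precomputed kernel for support vector regression. It uses fixed kernel weights for brevity; in actual MKL the weights are learned jointly with the regressor, and the data, bandwidths and weights here are all made up.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(3)

# Toy terrain-like data: two features acting at different "scales"
X = rng.uniform(0, 10, size=(200, 2))
y = np.sin(X[:, 0]) + 0.2 * X[:, 1] + rng.normal(0, 0.1, 200)
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

gammas = [1.0, 0.1]  # illustrative per-feature RBF bandwidths

def combined_kernel(A, B, weights):
    """Weighted sum of one RBF kernel per feature (one sub-problem each)."""
    return sum(w * rbf_kernel(A[:, [j]], B[:, [j]], gamma=g)
               for j, (w, g) in enumerate(zip(weights, gammas)))

weights = [0.7, 0.3]  # fixed here; MKL would learn these from the data
svr = SVR(kernel="precomputed", C=10.0)
svr.fit(combined_kernel(X_tr, X_tr, weights), y_tr)
pred = svr.predict(combined_kernel(X_te, X_tr, weights))
print("test RMSE:", np.sqrt(np.mean((pred - y_te) ** 2)))
# Large learned weights flag informative features, which is the
# interpretability and feature-selection benefit discussed above.
```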
Abstract:
The paper presents the multiple kernel learning (MKL) approach as a modelling and exploratory data analysis tool and applies it to the problem of wind speed mapping. Support vector regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple kernel learning is applied to learn kernels for individual features and thematic feature subsets, both in the context of feature selection and of optimal parameter determination. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem: the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. We compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. The proposed FastGAF method compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results close to those of more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
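In generic notation (a sketch consistent with the description above, not the paper's exact formulation), the splitting reads

\[
\min_{u,v}\; D(u) + R(v) \quad \text{subject to } u = v,
\]

handled through the augmented Lagrangian

\[
\mathcal{L}_\mu(u, v, \lambda) = D(u) + R(v) + \langle \lambda,\, u - v \rangle + \tfrac{\mu}{2}\,\|u - v\|_2^2,
\]

which is alternately minimized over the data-driven field \(u\) and the regularized field \(v\), followed by the multiplier update \(\lambda \leftarrow \lambda + \mu (u - v)\).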
Abstract:
Due to advances in sensor networks and remote sensing technologies, the acquisition and storage rates of meteorological and climatological data increase every day and call for novel and efficient processing algorithms. A fundamental problem of data analysis and modeling is the spatial prediction of meteorological variables in complex orography, which serves, among other purposes, extended climatological analyses, the assimilation of data into numerical weather prediction models, the preparation of inputs to hydrological models, and the real-time monitoring and short-term forecasting of weather.

In this thesis, a new framework for spatial estimation is proposed by taking advantage of a class of algorithms emerging from statistical learning theory. Nonparametric kernel-based methods for nonlinear data classification, regression and target detection, known as support vector machines (SVM), are adapted for the mapping of meteorological variables in complex orography.

With the advent of high-resolution digital elevation models, the field of spatial prediction met new horizons. By exploiting image processing tools along with physical heuristics, a very large number of terrain features accounting for the topographic conditions at multiple spatial scales can be extracted. Such features are highly relevant for the mapping of meteorological variables because they control a considerable part of the spatial variability of meteorological fields in the complex Alpine orography. For instance, patterns of orographic rainfall, wind speed and cold air pools are known to be correlated with particular terrain forms, e.g. convex/concave surfaces and upwind sides of mountain slopes.

Kernel-based methods are employed to learn the nonlinear statistical dependence which links the multidimensional space of geographical and topographic explanatory variables to the variable of interest, that is, the wind speed measured at weather stations or the occurrence of orographic rainfall patterns extracted from sequences of radar images. Compared to low-dimensional models integrating only the geographical coordinates, the proposed framework opens a way to regionalize meteorological variables which are multidimensional in nature and rarely show spatial autocorrelation in the original space, which makes the use of classical geostatistics awkward.

The challenges explored in this thesis are manifold. First, the complexity of the models is optimized to impose appropriate smoothness properties and to reduce the impact of noisy measurements. Second, a multiple kernel extension of SVM is considered to select the multiscale features which explain most of the spatial variability of wind speed. Then, SVM target detection methods are implemented to describe the orographic conditions which cause persistent and stationary rainfall patterns. Finally, the optimal splitting of the data is studied to estimate realistic performances and confidence intervals characterizing the uncertainty of predictions.

The resulting maps of average wind speed find applications in renewable resource assessment and open a route to decreasing the temporal scale of analysis to meet hydrological requirements. Furthermore, the maps depicting the susceptibility to orographic rainfall enhancement can be used to improve current radar-based quantitative precipitation estimation and forecasting systems, and to generate stochastic ensembles of precipitation fields conditioned upon the orography.
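As a toy illustration of the data-splitting point above (my own sketch on synthetic stations, not the thesis code): holding out a geographic block rather than random stations avoids test points sitting next to training points and yields more realistic error estimates for spatial prediction.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(4)

# Toy stations: coordinates, one terrain feature, and a smooth target field
coords = rng.uniform(0, 100, size=(300, 2))
slope = rng.uniform(0, 1, 300)
wind = 3 + 0.02 * coords[:, 0] + 2 * slope + rng.normal(0, 0.3, 300)
X = np.column_stack([coords, slope])

# Spatial block split: hold out one geographic strip instead of random
# points, so test stations are not adjacent to training stations.
test = coords[:, 0] > 75
model = SVR(C=10.0, gamma=0.05).fit(X[~test], wind[~test])
rmse = mean_squared_error(wind[test], model.predict(X[test])) ** 0.5
print(f"spatially blocked RMSE: {rmse:.2f} m/s")
```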
Abstract:
We investigated the association between exposure to radio-frequency electromagnetic fields (RF-EMFs) from broadcast transmitters and childhood cancer. First, we conducted a time-to-event analysis including children under age 16 years living in Switzerland on December 5, 2000. Follow-up lasted until December 31, 2008. Second, all children living in Switzerland for some time between 1985 and 2008 were included in an incidence density cohort. RF-EMF exposure from broadcast transmitters was modeled. Based on 997 cancer cases, adjusted hazard ratios in the time-to-event analysis for the highest exposure category (>0.2 V/m) as compared with the reference category (<0.05 V/m) were 1.03 (95% confidence interval (CI): 0.74, 1.43) for all cancers, 0.55 (95% CI: 0.26, 1.19) for childhood leukemia, and 1.68 (95% CI: 0.98, 2.91) for childhood central nervous system (CNS) tumors. Results of the incidence density analysis, based on 4,246 cancer cases, were similar for all types of cancer and leukemia but did not indicate a CNS tumor risk (incidence rate ratio = 1.03, 95% CI: 0.73, 1.46). This large census-based cohort study did not suggest an association between predicted RF-EMF exposure from broadcasting and childhood leukemia. Results for CNS tumors were less consistent, but the most comprehensive analysis did not suggest an association.
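For readers unfamiliar with the time-to-event analysis mentioned above, here is a minimal sketch of how a hazard ratio with 95% CI is obtained from cohort data using a Cox proportional hazards model, one standard way to produce such estimates (the paper does not state its exact model here). The data frame is entirely synthetic, and the binary exposure flag is a simplification of the study's exposure categories.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)

# Hypothetical cohort: follow-up time (years), cancer event indicator,
# and a binary high-exposure flag (>0.2 V/m) -- illustrative data only.
n = 5000
df = pd.DataFrame({
    "time": rng.exponential(8, n).clip(max=8.0),
    "event": rng.integers(2, size=n) * (rng.random(n) < 0.01),
    "high_exposure": rng.integers(2, size=n),
})
df.loc[df["time"] >= 8.0, "event"] = 0  # administratively censored

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# exp(coef) is the hazard ratio, the quantity reported in the abstract
print(cph.summary[["exp(coef)",
                   "exp(coef) lower 95%",
                   "exp(coef) upper 95%"]])
```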
Abstract:
This paper presents a new and original variational framework for atlas-based segmentation. The proposed framework integrates both the active contour framework and the dense deformation fields of the optical flow framework. This framework is quite general and encompasses many state-of-the-art atlas-based segmentation methods. It also allows performing the registration of atlas and target images based only on selected structures of interest. The versatility and potential of the proposed framework are demonstrated through three diverse applications: In the first application, we show how the proposed framework can be used to simulate the growth of inconsistent structures, such as a tumor, in an atlas. In the second application, we estimate the position of nonvisible brain structures based on the surrounding structures and validate the results by comparison with other methods. In the final application, we present the segmentation of lymph nodes in head and neck CT images, and demonstrate how multiple registration forces can be used in this framework in a hierarchical manner.
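In generic terms (a sketch of the family of methods described, not the paper's actual functional, which is not reproduced here), atlas-based variational frameworks of this kind minimize an energy over a dense deformation field \(u\) coupling image matching, regularity, and contour terms, e.g.

\[
E(u) = \int_\Omega \big(A(x + u(x)) - T(x)\big)^2\, dx \;+\; \lambda \int_\Omega \|\nabla u(x)\|^2\, dx \;+\; \gamma \oint_{C} g\, ds,
\]

where \(A\) is the atlas, \(T\) the target image, and the last term is an active contour energy (with edge indicator \(g\)) evaluated on the deformed contours of the selected structures of interest.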
Abstract:
The use of quantum dots (QDs) in the area of fingermark detection is currently receiving a lot of attention in the forensic literature. Most research efforts have been devoted to cadmium telluride (CdTe) quantum dots, often applied as powders to the surfaces of interest. Both the use of cadmium and the nano size of these particles raise important health and safety issues. This paper proposes to replace CdTe QDs with zinc sulphide QDs doped with copper (ZnS:Cu) to address these issues. Copper-doped zinc sulphide QDs were successfully synthesized, characterized in terms of size and optical properties, and optimized for the detection of impressions left in blood, where CdTe QDs have proved to be efficient. Effectiveness of detection was assessed in comparison with CdTe QDs and Acid Yellow 7 (AY7, an effective blood reagent), using two series of depletive blood fingermarks from four donors prepared on four non-porous substrates, i.e. glass, transparent polypropylene, black polyethylene and aluminium foil. The marks were cut in half and processed separately with both reagents, leading to two comparison series (ZnS:Cu vs. CdTe, and ZnS:Cu vs. AY7). ZnS:Cu proved to be better than AY7 and at least as efficient as CdTe on most substrates. Consequently, copper-doped ZnS QDs constitute a valid substitute for cadmium-based QDs for detecting blood marks on non-porous substrates and offer a safer alternative for routine use.
Abstract:
PURPOSE: To improve the traditional Nyquist ghost correction approach in echo planar imaging (EPI) at high fields, via schemes based on the reversal of the EPI readout gradient polarity for every other volume throughout a functional magnetic resonance imaging (fMRI) acquisition train. MATERIALS AND METHODS: An EPI sequence in which the readout gradient was inverted every other volume was implemented on two ultrahigh-field systems. Phantom images and fMRI data were acquired to evaluate ghost intensities and the presence of false-positive blood oxygenation level-dependent (BOLD) signal with and without ghost correction. Three different algorithms for ghost correction of alternating-readout EPI were compared. RESULTS: Irrespective of the chosen processing approach, ghosting was significantly reduced (up to 70% lower intensity) in both rat brain images acquired on a 9.4T animal scanner and human brain images acquired at 7T, resulting in a reduction of sources of false-positive activation in fMRI data. CONCLUSION: It is concluded that at high B0 fields, substantial gains in Nyquist ghost correction of echo planar time series are possible by alternating the readout gradient every other volume.
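A minimal sketch of the underlying mechanism (my own illustration, not the paper's reconstruction code): in EPI, odd and even k-space lines are read under opposite gradients, and a phase mismatch between them produces the N/2 ghost. Here the mismatch is known and undone exactly; in the alternating-polarity schemes compared above, the next volume carries the opposite error, which is what allows the mismatch to be estimated from the data.

```python
import numpy as np

# Toy object and EPI-like acquisition with a linear phase error on the
# lines read under negative gradient polarity (the classic ghost source).
N = 128
obj = np.zeros((N, N))
obj[40:88, 48:80] = 1.0
k = np.fft.fftshift(np.fft.fft2(obj))

phase_slope = 0.05  # illustrative odd/even phase mismatch (rad/pixel)
ramp = np.exp(1j * phase_slope * (np.arange(N) - N // 2))
k_ghosted = k.copy()
k_ghosted[1::2, :] *= ramp  # odd lines acquired under reversed gradient

def recon(kspace):
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

def ghost_level(img):
    # Sample the half-FOV shifted copy of the object (the Nyquist ghost)
    shifted = np.roll(img, N // 2, axis=0)
    return shifted[40:88, 48:80].mean()

# Correction: undo the estimated ramp on the odd lines
k_fixed = k_ghosted.copy()
k_fixed[1::2, :] *= ramp.conj()

print("ghost before:", ghost_level(recon(k_ghosted)))
print("ghost after: ", ghost_level(recon(k_fixed)))
```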