969 results for pre-image attack
Abstract:
A block-cipher-based hash function scheme is proposed whose rate is less than 1 yet which achieves higher efficiency; moreover, the hash function can be constructed from insecure compression functions, relaxing the security requirements placed on the compression function. First, the security of the new scheme is proved in the black-box model; then, block-cipher-based compression functions that can be used to instantiate the scheme are given; finally, experimental comparison shows that the new hash function is much faster than rate-1 hash functions. The experimental results indicate that, besides the rate, the key schedule is also an important factor in the efficiency of block-cipher-based hash functions, with an influence even greater than that of the rate. The scheme uses only two keys and requires no extensive key-expansion computation, which greatly improves the efficiency of block-cipher-based hash functions; furthermore, the scheme can be instantiated with existing block ciphers.
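The abstract does not spell out the construction itself. Purely as a generic illustration of how a hash function is built from a block cipher, here is a minimal Davies-Meyer sketch in Python, a classical rate-1 construction shown only for contrast with the paper's rate < 1 scheme; the toy XTEA cipher, the padding, and all names are illustrative, not the paper's design:

```python
import struct

def xtea_encrypt(block: bytes, key: bytes, rounds: int = 32) -> bytes:
    """Encrypt one 64-bit block under a 128-bit key with XTEA."""
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + k[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + k[(s >> 11) & 3]))) & mask
    return struct.pack(">2I", v0, v1)

def davies_meyer_hash(message: bytes, iv: bytes = bytes(8)) -> bytes:
    """H_i = E_{m_i}(H_{i-1}) XOR H_{i-1}: each 16-byte message block keys one
    cipher call (toy padding only; real schemes also encode the length)."""
    message += b"\x80" + bytes(-(len(message) + 1) % 16)
    h = iv
    for i in range(0, len(message), 16):
        e = xtea_encrypt(h, message[i:i + 16])
        h = bytes(a ^ b for a, b in zip(e, h))
    return h

print(davies_meyer_hash(b"pre-image attack").hex())
```

Here the rate is the number of message blocks processed per cipher call; the paper's point is that a scheme with rate below 1 can still win overall when its key schedule is cheaper.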
Abstract:
U.S. financial deregulation is often popularly presented as a fundamental attack on financial regulation that began with neoliberalism's Big Bang in 1980. This paper argues this position is wrong in two ways. First, it is a process that stretches back decades before 1980. Textbook mentions of 1970s precursor "financial innovations" fall far short of presenting the breadth and duration of the pre-1980 attack on the system of regulation. Second, it has not been an across-the-board attack on financial regulation in the name of market efficiency as required by its ideology and claimed by its advocates, but rather a focused attack on only one of the five pillars of the system of regulation. This paper develops both of these assertions through a presentation of the five central pillars of the pre-1980 system of financial regulation, and the four major attacks on the three different aspects of the restrictions on financial competition.
Abstract:
In this paper, a polynomial time algorithm is presented for solving the Eden problem for graph cellular automata. The algorithm is based on our neighborhood elimination operation, which removes local neighborhood configurations that cannot occur in any pre-image of a given configuration. This paper presents a detailed derivation of our algorithm from first principles, together with a detailed complexity and accuracy analysis. For time complexity, it is shown that the average case complexity of the algorithm is Θ(n²), while the best and worst cases are Ω(n) and O(n³) respectively. This represents a vast improvement in the upper bound over current methods, without compromising average case performance.
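The abstract does not reproduce the algorithm. As a point of reference for what the polynomial method improves on, here is the naive exponential-time Eden test for a binary cellular automaton on a cycle graph; this brute-force baseline and the choice of rule 110 are illustrative, not the paper's neighborhood-elimination procedure:

```python
from itertools import product

def step(config, rule):
    """One synchronous update of a binary CA on a cycle graph:
    each vertex sees the states (left, self, right)."""
    n = len(config)
    return tuple(rule[(config[i - 1], config[i], config[(i + 1) % n])]
                 for i in range(n))

def is_eden(config, rule):
    """Brute force: a configuration is a Garden of Eden iff it has no
    pre-image. Exponential in n, which is what the paper improves on."""
    n = len(config)
    return not any(step(cand, rule) == tuple(config)
                   for cand in product((0, 1), repeat=n))

# Elementary rule 110 encoded as a neighborhood -> next-state lookup table.
rule110 = {(a, b, c): (110 >> (a * 4 + b * 2 + c)) & 1
           for a in (0, 1) for b in (0, 1) for c in (0, 1)}

for cfg in product((0, 1), repeat=8):
    if is_eden(cfg, rule110):
        print("Eden configuration:", cfg)
        break
```

On a finite ring the CA map is surjective iff injective, and rule 110 collapses the all-ones and all-zeros rings to the same image, so an Eden configuration is guaranteed to exist and the loop terminates.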
Abstract:
This paper is devoted to the study of convergence properties of distances between points and the existence and uniqueness of best proximity and fixed points of so-called semicyclic impulsive self-mappings on the union of a number of nonempty subsets in metric spaces. The convergence of distances between consecutive iterated points is studied in metric spaces, while that associated with convergence to best proximity points is set in uniformly convex Banach spaces which are simultaneously complete metric spaces. The concept of semicyclic self-mappings generalizes the well-known concept of cyclic self-mappings in the sense that the iterated sequences built through such mappings are allowed to have images located in the same subset as their pre-image. The self-mappings under study may, in the most general case, be impulsive in the sense that they are composite mappings consisting of two self-mappings, one of which is eventually discontinuous. Thus, the developed formalism can be applied to the study of stability of a class of impulsive differential equations and of their discrete counterparts. Some application examples to impulsive differential equations are also given.
Abstract:
Many representations of physical objects or systems require dimensionality reduction techniques, which make it possible to analyze the data in low dimensions while capturing the essential parameters of the problem. In the machine learning context, this reduction is aimed primarily at clustering, recognition, and signal reconstruction. This thesis offers a meticulous analysis of these topics and of their connections, which are the subject of intense activity in the literature, with diffusion maps as the main focus of the work. The method is built from a graph whose vertices are the signals (the problem data) and whose edge weights are set by the Gaussian kernel of the heat equation. In addition, a Markov process is defined, which allows the problem to be viewed at different scales as a certain parameter t varies. Another scale parameter, ε, for the Gaussian kernel is carefully evaluated and related to the Markov dynamics so as to learn the manifold that may support the data. The thesis proposes the recognition of digital images under rotation and illumination changes. The signal reconstruction problem is also attacked through a pre-image proposal based on the optimization of a cost function with a regularization parameter, γ, that also takes the initial data set into account.
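A minimal sketch of the construction described here (Gaussian heat-kernel weights on a graph of signals, row-normalized into a Markov matrix, embedded at diffusion scale t); the parameter names ε and t follow the abstract, while the synthetic noisy-circle data and all other choices are assumptions:

```python
import numpy as np

def diffusion_map(X, eps, t, k=2):
    """Diffusion map: Gaussian heat kernel -> Markov matrix -> embedding."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # ||x_i - x_j||^2
    W = np.exp(-sq / eps)                     # heat-kernel edge weights
    P = W / W.sum(axis=1, keepdims=True)      # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)             # spectrum is real: P is similar
    order = np.argsort(-vals.real)            # to a symmetric matrix
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Coordinates at diffusion scale t; the leading eigenvector is constant.
    return (vals[1:k + 1] ** t) * vecs[:, 1:k + 1]

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(200, 2))
print(diffusion_map(X, eps=0.5, t=3).shape)   # (200, 2): noisy circle embedded
```

Raising the eigenvalues to the power t is what produces the multiscale views the abstract mentions: larger t damps the small eigenvalues and coarsens the geometry.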
Abstract:
This work focused on the study of subspace techniques for the following applications: noise removal in time series and feature extraction for supervised classification problems. Both the linear and nonlinear variants of these techniques were studied, taking the SSA and KPCA algorithms as starting points. The work presents proposals for optimizing the algorithms, as well as a description of them from a perspective different from the one found in the literature. In both the linear and nonlinear settings, the methods are presented using a consistent algebraic formulation. The subspace model is obtained by computing the eigendecomposition of kernel or correlation/covariance matrices calculated from a multidimensional data set. The complexity of nonlinear subspace techniques is discussed, namely the pre-image problem and the eigendecomposition of high-dimensional matrices. Different pre-image algorithms are presented, together with alternative proposals for their optimization. An eigendecomposition of the kernel matrix based on low-rank approximations leads to a more efficient algorithm, Greedy KPCA. The algorithms are applied to artificial signals in order to study the influence of the various parameters on their performance. Furthermore, the use of these techniques is extended to artifact removal in univariate biomedical time series, namely EEG signals.
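For concreteness, here is one classical fixed-point scheme for the Gaussian-kernel KPCA pre-image, in the style of Mika et al., uncentered for brevity; the thesis proposes its own variants and optimizations, so this is only a baseline sketch, and the parameters and toy data are assumptions:

```python
import numpy as np

def rbf(X, Z, gamma):
    """Gaussian kernel matrix between the rows of X and the rows of Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kpca_preimage(X, z0, gamma=1.0, n_comp=4, iters=200):
    """Project phi(z0) onto the leading kernel principal components, then
    iterate z <- kernel-weighted mean of the data (fixed-point update)."""
    K = rbf(X, X, gamma)
    vals, vecs = np.linalg.eigh(K)                   # ascending eigenvalues
    A = vecs[:, -n_comp:] / np.sqrt(vals[-n_comp:])  # normalized coefficients
    z = z0.copy()
    for _ in range(iters):
        kz = rbf(X, z[None, :], gamma).ravel()       # k(x_i, z)
        g = A @ (A.T @ kz)                           # expansion weights
        w = g * kz
        z = (w @ X) / w.sum()                        # fixed-point update
    return z

# Denoise one perturbed sample from a noisy circle-shaped data set.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(300, 2))
z = kpca_preimage(X, X[0] + 0.3, gamma=2.0)
print(np.linalg.norm(z))   # roughly 1: the point is pulled back to the circle
```

The cost structure is visible here: every iteration touches the full kernel matrix, which is what motivates low-rank approaches such as Greedy KPCA.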
Abstract:
The blow-up is a transformation that plays an important role in geometry, since it makes it possible to resolve singularities, to relate birationally equivalent varieties, and to construct varieties with novel properties. This thesis first presents the blow-up as developed in classical algebraic geometry. We study it for affine and (quasi-)projective varieties, at a point, and along an ideal and a subvariety. We then study the extension of this construction to the differentiable category, over the real and complex fields, at a point and along a submanifold. We conclude this section by exploring an example of resolution of a singularity. We then move to the symplectic category, where we do the same as in the complex differentiable case, paying particular attention to the symplectic form defined on the manifold. We finish by studying a theorem due to François Lalonde, in whose proof the blow-up plays a key role. This theorem states that any 4-manifold fibered by 2-spheres over a Riemann surface, and different from the Cartesian product of two 2-spheres, can be equipped with a 2-form that gives it a symplectic structure ruled by curves that are holomorphic with respect to its almost complex structure, and such that the symplectic area of the base is smaller than the capacity of the manifold. The proof relies on the symplectic blow-up. Indeed, by symplectically blowing up a ball contained in the 4-manifold, one can obtain a fibration containing two distinct spheres of self-intersection -1: the pre-image of the point where the usual complex blow-up is performed, and the proper transform of the fiber. These spheres are called exceptional, so the inverse of the blow-up, the contraction, can be performed on each of them. Performing it on the second one yields a minimal manifold, and by combining the information on the symplectic areas of its homology classes with that of the original manifold we obtain the result.
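For reference, the classical construction the thesis starts from can be stated compactly (this is the standard definition, not specific to this work): the blow-up of affine space at the origin replaces the point with the projective space of directions through it,

```latex
\[
\operatorname{Bl}_0(\mathbb{A}^n)
  = \{\, (x,[y]) \in \mathbb{A}^n \times \mathbb{P}^{n-1}
      \;:\; x_i y_j = x_j y_i \ \text{for all } i,j \,\},
\]
```

with projection $\pi \colon \operatorname{Bl}_0(\mathbb{A}^n) \to \mathbb{A}^n$. The map $\pi$ is an isomorphism away from the origin, and the exceptional divisor $E = \pi^{-1}(0) \cong \mathbb{P}^{n-1}$ is precisely the pre-image of the blown-up point, the role played above by the exceptional spheres.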
Abstract:
This thesis addresses the problem of developing automatic grasping capabilities for robotic hands. Using a 2-jointed and a 4-jointed model of the hand, we establish the geometric conditions necessary for achieving form closure grasps of cylindrical objects. We then define and show how to construct the grasping pre-image for quasi-static (friction dominated) and zero-G (inertia dominated) motions for sensorless and sensor-driven grasps with and without arm motions. While the approach does not rely on detailed modeling, it is computationally inexpensive, reliable, and easy to implement. Example behaviors were successfully implemented on the Salisbury hand and on a planar 2-fingered, 4 degree-of-freedom hand.
Abstract:
BACKGROUND Clinical prognostic groupings for localised prostate cancers are imprecise, with 30-50% of patients recurring after image-guided radiotherapy or radical prostatectomy. We aimed to test combined genomic and microenvironmental indices in prostate cancer to improve risk stratification and complement clinical prognostic factors. METHODS We used DNA-based indices alone or in combination with intra-prostatic hypoxia measurements to develop four prognostic indices in 126 low-risk to intermediate-risk patients (Toronto cohort) who will receive image-guided radiotherapy. We validated these indices in two independent cohorts of 154 (Memorial Sloan Kettering Cancer Center [MSKCC] cohort) and 117 (Cambridge cohort) radical prostatectomy specimens from low-risk to high-risk patients. We applied unsupervised and supervised machine learning techniques to the copy-number profiles of 126 pre-image-guided radiotherapy diagnostic biopsies to develop prognostic signatures. Our primary endpoint was the development of a set of prognostic measures capable of stratifying patients for risk of biochemical relapse 5 years after primary treatment. FINDINGS Biochemical relapse was associated with indices of tumour hypoxia, genomic instability, and genomic subtypes based on multivariate analyses. We identified four genomic subtypes for prostate cancer, which had different 5-year biochemical relapse-free survival. Genomic instability is prognostic for relapse in both image-guided radiotherapy (multivariate analysis hazard ratio [HR] 4·5 [95% CI 2·1-9·8]; p=0·00013; area under the receiver operating characteristic curve [AUC] 0·70 [95% CI 0·65-0·76]) and radical prostatectomy (4·0 [1·6-9·7]; p=0·0024; AUC 0·57 [0·52-0·61]) patients with prostate cancer, and its effect is magnified by intratumoral hypoxia (3·8 [1·2-12]; p=0·019; AUC 0·67 [0·61-0·73]). A novel 100-loci DNA signature accurately classified treatment outcome in the MSKCC low-risk to intermediate-risk cohort (multivariate analysis HR 6·1 [95% CI 2·0-19]; p=0·0015; AUC 0·74 [95% CI 0·65-0·83]). In the independent MSKCC and Cambridge cohorts, this signature identified low-risk to high-risk patients who were most likely to fail treatment within 18 months (combined cohorts multivariate analysis HR 2·9 [95% CI 1·4-6·0]; p=0·0039; AUC 0·68 [95% CI 0·63-0·73]), and was better at predicting biochemical relapse than 23 previously published RNA signatures. INTERPRETATION This is the first study of cancer outcome to integrate DNA-based and microenvironment-based failure indices to predict patient outcome. Patients exhibiting these aggressive features after biopsy should be entered into treatment intensification trials. FUNDING Movember Foundation, Prostate Cancer Canada, Ontario Institute for Cancer Research, Canadian Institute for Health Research, NIHR Cambridge Biomedical Research Centre, The University of Cambridge, Cancer Research UK, Cambridge Cancer Charity, Prostate Cancer UK, Hutchison Whampoa Limited, Terry Fox Research Institute, Princess Margaret Cancer Centre Foundation, PMH-Radiation Medicine Program Academic Enrichment Fund, Motorcycle Ride for Dad (Durham), Canadian Cancer Society.
Abstract:
Most of the existing algorithms for approximate Bayesian computation (ABC) assume that it is feasible to simulate pseudo-data from the model at each iteration. However, the computational cost of these simulations can be prohibitive for high dimensional data. An important example is the Potts model, which is commonly used in image analysis. Images encountered in real world applications can have millions of pixels, therefore scalability is a major concern. We apply ABC with a synthetic likelihood to the hidden Potts model with additive Gaussian noise. Using a pre-processing step, we fit a binding function to model the relationship between the model parameters and the synthetic likelihood parameters. Our numerical experiments demonstrate that the precomputed binding function dramatically improves the scalability of ABC, reducing the average runtime required for model fitting from 71 hours to only 7 minutes. We also illustrate the method by estimating the smoothing parameter for remotely sensed satellite imagery. Without precomputation, Bayesian inference is impractical for datasets of that scale.
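A toy sketch of the precomputation idea: fit a binding function offline, then evaluate the synthetic likelihood online with no fresh simulations. The one-parameter Gaussian simulator and the polynomial regression below are illustrative stand-ins for the hidden Potts model and the paper's actual binding function:

```python
import numpy as np

rng = np.random.default_rng(2)

def summary(theta, n=200):
    """Toy simulator: mean of pseudo-data drawn at parameter theta."""
    return rng.normal(np.sin(theta), 0.3, size=n).mean()

# Offline precomputation: fit a binding function mapping theta to the mean
# and spread of the synthetic-likelihood summary statistic.
grid = np.linspace(0.0, 1.5, 50)
sims = np.array([[summary(th) for _ in range(30)] for th in grid])
mu_fit = np.polynomial.Polynomial.fit(grid, sims.mean(axis=1), deg=3)
sd_fit = np.polynomial.Polynomial.fit(grid, sims.std(axis=1), deg=3)

# Online inference: the binding function replaces per-iteration simulation.
def log_synth_lik(theta, s_obs):
    mu, sd = mu_fit(theta), max(sd_fit(theta), 1e-6)
    return -0.5 * ((s_obs - mu) / sd) ** 2 - np.log(sd)

s_obs = summary(1.0)                     # pretend this is the observed data
thetas = np.linspace(0.0, 1.5, 400)
post = np.exp([log_synth_lik(t, s_obs) for t in thetas])
print("posterior mode near:", round(thetas[post.argmax()], 2))
```

The runtime gain reported in the abstract (71 hours down to 7 minutes) comes from exactly this shift of all simulation cost into the offline stage.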
Abstract:
We present an irreversible watermarking technique robust to affine-transform attacks in camera, biomedical and satellite images stored in the form of monochrome bitmap images. The watermarking approach is based on image normalisation, in which both watermark embedding and extraction are carried out with respect to an image normalised to meet a set of predefined moment criteria. The normalisation procedure is invariant to affine-transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. Here, a direct-sequence code-division multiple-access approach is used to embed multibit text information in the DCT and DWT transform domains. The proposed watermarking schemes are robust against various types of attacks such as Gaussian noise, shearing, scaling, rotation, flipping, affine transform, signal processing and JPEG compression. Performance analysis results are measured using image processing metrics.
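A minimal sketch of direct-sequence spread-spectrum embedding in the DCT domain; the normalisation step and the DWT path are omitted, and the band selection, strength alpha, and seed below are assumptions rather than the paper's settings:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_ss_watermark(img, bits, alpha=5.0, seed=7):
    """Each bit modulates a pseudo-random +/-1 chip sequence added to
    mid-frequency DCT coefficients; a detector regenerates the same
    sequences from the seed and correlates to recover the bits."""
    C = dctn(img.astype(float), norm="ortho")
    rng = np.random.default_rng(seed)
    r, c = np.indices(C.shape)
    band = np.argwhere((r + c > 32) & (r + c < 96))   # illustrative mid band
    rng.shuffle(band)                                 # spread bits over the band
    chips = len(band) // len(bits)
    for i, b in enumerate(bits):
        pn = rng.choice([-1.0, 1.0], size=chips)      # pseudo-noise chips
        idx = band[i * chips:(i + 1) * chips]
        C[idx[:, 0], idx[:, 1]] += alpha * (1.0 if b else -1.0) * pn
    return idctn(C, norm="ortho")

img = np.random.default_rng(0).uniform(0, 255, (128, 128))
marked = embed_ss_watermark(img, [1, 0, 1, 1])
print(float(np.abs(marked - img).mean()))             # small mean distortion
```

Because embedding and detection are both performed on the moment-normalised image, an affine attack is undone before the correlation step, which is what gives the scheme its claimed robustness.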
Abstract:
Intraoperative assessment of surgical margins is critical to ensuring residual tumor does not remain in a patient. Previously, we developed a fluorescence structured illumination microscope (SIM) system with a single-shot field of view (FOV) of 2.1 × 1.6 mm (3.4 mm²) and sub-cellular resolution (4.4 μm). The goal of this study was to test the utility of this technology for the detection of residual disease in a genetically engineered mouse model of sarcoma. Primary soft tissue sarcomas were generated in the hindlimb, and after the tumor was surgically removed, the relevant margin was stained with acridine orange (AO), a vital stain that brightly stains cell nuclei and fibrous tissues. The tissues were imaged with the SIM system with the primary goal of visualizing fluorescent features from tumor nuclei. Given the heterogeneity of the background tissue (presence of adipose tissue and muscle), an algorithm known as maximally stable extremal regions (MSER) was optimized and applied to the images to specifically segment nuclear features. A logistic regression model was used to classify a tissue site as positive or negative by calculating the area fraction and shape of the segmented features that were present, and the resulting receiver operating characteristic (ROC) curve was generated by varying the probability threshold. Based on the ROC curves, the model was able to classify tumor and normal tissue with 77% sensitivity and 81% specificity (Youden's index). For an unbiased measure of the model performance, it was applied to a separate validation dataset that resulted in 73% sensitivity and 80% specificity. When this approach was applied to representative whole margins, for a tumor probability threshold of 50%, only 1.2% of all regions from the negative margin exceeded this threshold, while over 14.8% of all regions from the positive margin exceeded this threshold.
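A sketch of the feature-extraction step using OpenCV's MSER implementation; the size thresholds and the eccentricity shape measure are assumptions, since the abstract only states that the area fraction and shape of the segmented features feed the classifier:

```python
import cv2
import numpy as np

def nuclear_features(gray):
    """MSER-segment bright nuclear features, then compute the two descriptor
    families named in the abstract: area fraction and a shape measure
    (eccentricity of fitted ellipses; an illustrative choice)."""
    mser = cv2.MSER_create()
    mser.setMinArea(20)            # plausible nucleus sizes in pixels (assumed)
    mser.setMaxArea(500)
    regions, _ = mser.detectRegions(gray)
    area_fraction = sum(len(r) for r in regions) / gray.size
    ecc = []
    for r in regions:
        if len(r) >= 5:            # cv2.fitEllipse requires at least 5 points
            (_, _), (d1, d2), _ = cv2.fitEllipse(r)
            lo, hi = sorted((d1, d2))
            if hi > 0:
                ecc.append(np.sqrt(max(0.0, 1.0 - (lo / hi) ** 2)))
    return area_fraction, float(np.mean(ecc)) if ecc else 0.0
```

Per-site feature pairs like these would then be fed to a logistic regression, with the ROC curve traced by sweeping the tumor-probability threshold as described in the abstract.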
Abstract:
The study was carried out at the UNESP Rio Claro campus (SP), where biotests consisting of simulated ant attacks were performed on colonies of Mischocyttarus cerberus. The behavior of the wasps was recorded with a camcorder for further analysis, which was done using the Mann-Whitney U test and Principal Component Analysis. In the pre-emergence development stage, colonies with a single foundress defend the nest only after the first larvae appear. When there are only eggs in the nest, the wasp abandons the nest; before leaving, it rubs its gaster against the nest, releasing the ant-repellent secretion. When the nest contains larvae, or larvae and pupae, the foundress defends the colony, vibrating its wings, pumping its abdomen and biting the ant.
Abstract:
In this work an image pre-processing module has been developed to extract quantitative information from plantation images with various degrees of infestation. Four filters comprise this module: the first smooths the image, the second removes the image background while enhancing plant leaves, the third removes isolated dots not removed by the previous filter, and the fourth highlights the leaves' edges. The filters were first tested in MATLAB, for quick visual feedback on their behavior, and then implemented in the C programming language. Finally, the module was coded in VHDL for implementation on a Stratix II family FPGA. Tests were run and the results are shown in this paper.
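The abstract names each filter's role but not its operator. A hedged NumPy/SciPy sketch of the four stages, using common choices (Gaussian smoothing, excess-green background removal, morphological opening, Sobel edges) that are assumptions rather than the module's actual filters:

```python
import numpy as np
from scipy import ndimage

def preprocess(rgb):
    """Four-stage pipeline mirroring the module's filter roles; the specific
    operators chosen here are illustrative, not the paper's implementation."""
    # 1) Smooth each channel to suppress sensor noise.
    smooth = ndimage.gaussian_filter(rgb.astype(float), sigma=(1, 1, 0))
    # 2) Remove background, enhancing leaves: excess-green index 2G - R - B.
    r, g, b = smooth[..., 0], smooth[..., 1], smooth[..., 2]
    mask = (2 * g - r - b) > 20
    # 3) Remove isolated dots the thresholding left behind.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    # 4) Highlight leaf edges on the cleaned mask.
    edges = np.hypot(ndimage.sobel(mask.astype(float), 0),
                     ndimage.sobel(mask.astype(float), 1))
    return mask, edges
```

Each stage maps naturally onto a fixed-function hardware block, which is what makes the pipeline a good fit for the FPGA implementation described above.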