759 results for Calabi-Yau manifold
Abstract:
We performed an integrated genomic, transcriptomic and proteomic characterization of 373 endometrial carcinomas using array- and sequencing-based technologies. Uterine serous tumours and ∼25% of high-grade endometrioid tumours had extensive copy number alterations, few DNA methylation changes, low oestrogen receptor/progesterone receptor levels, and frequent TP53 mutations. Most endometrioid tumours had few copy number alterations or TP53 mutations, but frequent mutations in PTEN, CTNNB1, PIK3CA, ARID1A and KRAS and novel mutations in the SWI/SNF chromatin remodelling complex gene ARID5B. A subset of endometrioid tumours that we identified had a markedly increased transversion mutation frequency and newly identified hotspot mutations in POLE. Our results classified endometrial cancers into four categories: POLE ultramutated, microsatellite instability hypermutated, copy-number low, and copy-number high. Uterine serous carcinomas share genomic features with ovarian serous and basal-like breast carcinomas. We demonstrated that the genomic features of endometrial carcinomas permit a reclassification that may affect post-surgical adjuvant treatment for women with aggressive tumours.
Abstract:
Denial-of-service (DoS) attacks are a growing concern for networked services such as the Internet. In recent years, major Internet e-commerce and government sites have been disabled by various DoS attacks. A common form of DoS attack is the resource depletion attack, in which an attacker tries to overload the server's resources, such as memory or computational power, rendering the server unable to service honest clients. A promising way to deal with this problem is for a defending server to identify and segregate malicious traffic as early as possible. Client puzzles, also known as proofs of work, have been shown to be a promising tool for thwarting DoS attacks in network protocols, particularly in authentication protocols. In this thesis, we design efficient client puzzles and propose a stronger security model for analysing client puzzles. We revisit a few key establishment protocols to analyse their DoS-resilience properties and strengthen them using existing and novel techniques. Our contributions in the thesis are manifold. We propose an efficient client puzzle whose security holds in the standard model under new computational assumptions. Assuming the presence of powerful DoS attackers, we find a weakness in the most recent security model proposed for analysing client puzzles, and this study leads us to introduce a better security model for analysing client puzzles. We demonstrate the utility of our new security definitions by constructing two stronger hash-based client puzzles. We also show that, using stronger client puzzles, any protocol can be converted into a provably secure DoS-resilient key exchange protocol. Among our other contributions, we analyse the DoS-resilience properties of network protocols such as Just Fast Keying (JFK) and Transport Layer Security (TLS). In the JFK protocol, we identify a new DoS attack by applying Meadows' cost-based framework to analyse DoS-resilience properties. We also prove that the original security claim of JFK does not hold. We then apply an existing technique to reduce the server cost and prove that the new variant of JFK achieves perfect forward secrecy (a property not achieved by the original JFK protocol) and remains secure under the original security assumptions of JFK. Finally, we introduce a novel cost-shifting technique that reduces the computation cost of the server significantly, and we employ the technique in the most important network protocol, TLS, to analyse the security of the resultant protocol. We also observe that the cost-shifting technique can be incorporated into any Diffie-Hellman based key exchange protocol to reduce the Diffie-Hellman exponentiation cost of a party by one multiplication and one addition.
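The abstract does not spell out the thesis's puzzle constructions. As illustrative background, here is a minimal sketch of a generic hash-based client puzzle of the kind such work builds on; the function names and the difficulty parameter are our own assumptions, not taken from the thesis:

```python
import hashlib
import itertools
import os

DIFFICULTY_BITS = 20  # puzzle hardness; a server would tune this under load

def new_puzzle() -> bytes:
    """Server side: issue a fresh, unpredictable challenge."""
    return os.urandom(16)

def solve(challenge: bytes) -> int:
    """Client side: brute-force a solution by trial hashing."""
    for s in itertools.count():
        digest = hashlib.sha256(challenge + s.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
            return s

def verify(challenge: bytes, s: int) -> bool:
    """Server side: verification costs a single hash."""
    digest = hashlib.sha256(challenge + s.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0
```

Solving costs the client roughly 2^DIFFICULTY_BITS hash evaluations on average, while verification costs the server a single hash; this asymmetry is what deters resource depletion attacks.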
Abstract:
Whether to keep products segregated (e.g., unbundled) or integrate some or all of them (e.g., bundle them) has been a problem of profound interest in areas such as portfolio theory in finance, risk capital allocation in insurance, and the marketing of consumer products. Such decisions are inherently complex and depend on factors such as the underlying product values and consumer preferences, the latter frequently being described using value functions, also known as utility functions in economics. In this paper, we develop decision rules for multiple products, which we generally call ‘exposure units’ to naturally cover manifold scenarios spanning well beyond ‘products’. Our findings show, for example, that Thaler's celebrated principles of mental accounting hold as originally postulated when the values of all exposure units are positive (i.e., all are gains) or all negative (i.e., all are losses). In the case of exposure units with mixed-sign values, the decision rules are much more complex and rely on cataloguing a number of cases that grows very fast (as the Bell number) with the number of exposure units. Consequently, in the present paper we provide detailed rules for the integration and segregation decisions in the case of up to three exposure units, and partial rules for an arbitrary number of units.
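As a hedged illustration of the all-gains and all-losses cases, the sketch below evaluates the integration/segregation choice using the standard Tversky-Kahneman value function; the paper's own value functions and decision rules may differ:

```python
ALPHA, LAMBDA = 0.88, 2.25  # Tversky-Kahneman (1992) parameter estimates

def v(x: float) -> float:
    """S-shaped value function: concave for gains, convex for losses."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

def prefer_integration(x: float, y: float) -> bool:
    """True if v(x + y) > v(x) + v(y), i.e. bundling is preferred."""
    return v(x + y) > v(x) + v(y)

# Two gains: segregate, because v is concave on gains
print(prefer_integration(50, 50))    # False -> segregate gains
# Two losses: integrate, because v is convex on losses
print(prefer_integration(-50, -50))  # True  -> integrate losses
```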
Abstract:
The first fiber Bragg grating (FBG) accelerometer using direct transverse forces is demonstrated by fixing the FBG at its two ends and placing a transversely moving inertial object at its middle. It is very sensitive because a lightly stretched FBG is more sensitive to transverse forces than to axial forces. Its resonant frequency and static sensitivity are analyzed using the classic spring-mass theory, under the assumption that the axial force changes little. The experiments show that the theory can be modified for cases where the assumption does not hold: the resonant frequency can be corrected by an experimentally obtained linear relationship, and the static sensitivity by a proposed alternative method. Over-range protection and low cross-axis sensitivity are achieved by limiting the movement of the FBG, and both were validated experimentally. Sensitivities of 1.333 and 0.634 nm/g were experimentally achieved with 5.29 and 2.83 gram inertial objects at 10 Hz from 0.1 to 0.4 g (g = 9.8 m/s²), respectively, and the corresponding resonant frequencies were around 25 Hz. The theoretical static sensitivities and resonant frequencies found through the modifications are 1.188 nm/g and 26.81 Hz for the 5.29 gram object, and 0.784 nm/g and 29.04 Hz for the 2.83 gram object, respectively.
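For context, a minimal sketch of the classic spring-mass estimate the abstract refers to. The mass is taken from the abstract; the stiffness is merely backed out of the reported resonance for illustration and is not a value from the paper:

```python
import math

def resonant_frequency_hz(k: float, m: float) -> float:
    """Classic spring-mass estimate: f_r = sqrt(k / m) / (2 * pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Illustrative only: back out the effective stiffness implied by the
# reported ~25 Hz resonance of the 5.29 gram inertial object.
m = 5.29e-3                            # kg
k = (2.0 * math.pi * 25.0) ** 2 * m    # ~130 N/m
print(resonant_frequency_hz(k, m))     # recovers 25.0 Hz
```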
Abstract:
In many bridges, vertical displacements are among the most relevant parameters for structural health monitoring in both the short and long term. Bridge managers around the globe are always looking for a simple way to measure the vertical displacements of bridges, yet such measurements are difficult to carry out. In recent years, with the advancement of fibre-optic technologies, fibre Bragg grating (FBG) sensors have become more commonly used in structural health monitoring due to their outstanding advantages, including multiplexing capability, immunity to electromagnetic interference, and high resolution and accuracy. For these reasons, a methodology for measuring the vertical displacements of bridges using FBG sensors is proposed. The methodology includes two approaches: one is based on curvature measurements, while the other utilises inclination measurements from successfully developed FBG tilt sensors. A series of simulation tests of a full-scale bridge shows that both approaches can measure the vertical displacements of bridges with various support conditions and varying stiffness along the spans, without any prior knowledge of the loading. A static beam test with increasing loads at the mid-span and a beam test with different loading locations were conducted to measure vertical displacements using FBG strain sensors and tilt sensors. The results show that both approaches can successfully measure vertical displacements.
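A rough sketch of how a curvature-based approach can recover deflections, assuming a simply supported span and densely sampled curvature; the paper's actual formulation may differ:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def deflection_from_curvature(x: np.ndarray, kappa: np.ndarray) -> np.ndarray:
    """Integrate v''(x) = kappa(x) twice, then remove the rigid-body line
    so that v = 0 at both supports (simply supported span assumed)."""
    slope = cumulative_trapezoid(kappa, x, initial=0.0)
    v = cumulative_trapezoid(slope, x, initial=0.0)
    v -= v[0] + (v[-1] - v[0]) * (x - x[0]) / (x[-1] - x[0])
    return v

# Sanity check: a uniform load q gives kappa = q*x*(L-x)/(2*EI); the
# recovered mid-span deflection should match 5*q*L**4/(384*EI).
L, q_over_EI = 10.0, 1e-4
x = np.linspace(0.0, L, 201)
kappa = q_over_EI * x * (L - x) / 2.0
print(deflection_from_curvature(x, kappa).min())   # ~ -0.0130 m (downward)
```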
Abstract:
A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
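A minimal, hypothetical sketch of the particle filter idea described above, with the state reduced to one dimension (position along the path); this is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = np.zeros(N)        # position along the robot's path (metres)
weights = np.ones(N) / N

def motion_update(odom_delta: float, sigma: float = 0.1) -> None:
    """Propagate particles by dead reckoning plus noise."""
    global particles
    particles = particles + odom_delta + rng.normal(0.0, sigma, N)

def loop_closure_update(z: float, sigma: float = 0.5) -> None:
    """Reweight by the likelihood of a visual loop closure at path
    position z, then resample."""
    global particles, weights
    weights *= np.exp(-0.5 * ((particles - z) / sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(N, size=N, p=weights)
    particles, weights[:] = particles[idx], 1.0 / N

motion_update(1.0)
loop_closure_update(1.2)
print(particles.mean())        # estimated most likely path position
```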
Abstract:
In this article, we analyse bifurcations from stationary stable spots to travelling spots in a planar three-component FitzHugh-Nagumo system that was proposed previously as a phenomenological model of gas-discharge systems. By combining formal analyses, center-manifold reductions, and detailed numerical continuation studies, we show that, in the parameter regime under consideration, the stationary spot destabilizes either through its zeroth Fourier mode in a Hopf bifurcation or through its first Fourier mode in a pitchfork or drift bifurcation, whilst the remaining Fourier modes appear to create only secondary bifurcations. Pitchfork bifurcations result in travelling spots, and we derive criteria for the criticality of these bifurcations. Our main finding is that supercritical drift bifurcations, leading to stable travelling spots, arise in this model, which does not seem possible for its two-component version.
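For reference, the three-component FitzHugh-Nagumo system from the gas-discharge literature is commonly written as follows; parameter names vary between papers, and the abstract itself does not restate the model:

$$
\begin{aligned}
u_t &= D_u\,\Delta u + \lambda u - u^3 - \kappa_3 v - \kappa_4 w + \kappa_1,\\
\tau\,v_t &= D_v\,\Delta v + u - v,\\
\theta\,w_t &= D_w\,\Delta w + u - w,
\end{aligned}
$$

where $u$ is the activator and $v$, $w$ are two inhibitors with time constants $\tau$ and $\theta$. In this setting, a drift bifurcation sets the spot in motion when its first Fourier (translation) mode loses stability.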
Abstract:
A fiber Bragg grating (FBG) accelerometer using transverse forces is more sensitive than one using axial forces with the same inertial mass, because a barely stretched FBG fixed at its two ends is much more sensitive to transverse forces than to axial ones. The spring-mass theory, with the assumption that the axial force changes little during vibration, cannot accurately predict its sensitivity and resonant frequency in the gravitational direction, because the assumption does not hold when the FBG is barely prestretched. The theory was modified, but still required experimental verification due to limitations of the original experiments, such as (1) friction between the inertial object and the shell; (2) errors involved in estimating from the time-domain records; (3) limited data; and (4) the large interval (∼5 Hz) between the tested frequencies in the frequency-response experiments. The experiments presented here verify the modified theory by overcoming those limitations. On the frequency responses, it is observed that the optimal condition for simultaneously achieving high sensitivity and high resonant frequency is an infinitesimal prestretch. On the sensitivity at the same frequency, the experimental sensitivities of the FBG accelerometer with a 5.71 gram inertial object at 6 Hz (1.29, 1.19, 0.88, 0.64, and 0.31 nm/g at the 0.03, 0.69, 1.41, 1.93, and 3.16 nm prestretches, respectively) agree with the predicted static sensitivities (1.25, 1.14, 0.83, 0.61, and 0.29 nm/g, correspondingly). On the resonant frequency, (1) the assumption that the resonant frequencies in the forced and free vibrations are similar is experimentally verified; (2) its dependence on the distance between the FBG's fixed ends is examined, showing the resonant frequency to be independent of that distance; and (3) the predictions of the spring-mass theory and the modified theory are compared with the experimental results, showing that the modified theory predicts more accurately. The modified theory can thus be used with greater confidence to guide design by predicting static sensitivity and resonant frequency, and it may find applications in other fields where the spring-mass theory fails.
Abstract:
Research on journalists’ characteristics, values, attitudes and role perceptions has expanded manifold since the first large-scale survey in the United States in the 1970s. Scholars around the world have investigated the work practices of a large variety of journalists, to the extent that we now have a sizeable body of evidence in this regard. Comparative research across cultures, however, has only recently begun to gain ground, with scholars interested in concepts of journalism culture in an age of globalisation. As part of a wider, cross-cultural effort, this study reports the results of a survey of 100 Australian journalists in order to paint a picture of the way journalists see their role in society. Such a study is important due to the relative absence of large-scale surveys of Australian journalists since Henningham’s (1993) seminal work. This paper reports some important trends in the Australian news media since the early 1990s, with improvements in gender balance and journalists now being older, better educated, and holding more leftist political views. In locating Australian journalism culture within the study’s framework, some long-held assumptions are reinforced, with journalists following traditional values of objectivity, passive reporting and the ideal of the fourth estate.
Abstract:
Analysis of behavioural consistency is an important aspect of software engineering. In process and service management, consistency verification of behavioural models has manifold applications. For instance, a business process model used as a system specification and a corresponding workflow model used as an implementation have to be consistent. Another example is analysing to what degree a process log of executed business operations is consistent with the corresponding normative process model. Typically, existing notions of behaviour equivalence, such as bisimulation and trace equivalence, are applied as consistency notions. However, these notions require exponential computation and yield only a Boolean result. In many cases, a quantification of behavioural deviation is needed, along with concepts to isolate the source of deviation. In this article, we propose causal behavioural profiles as the basis for a consistency notion. These profiles capture essential behavioural information, such as order, exclusiveness, and causality between pairs of activities of a process model. Consistency based on these profiles is weaker than trace equivalence, but can be computed efficiently for a broad class of models. We introduce techniques for computing causal behavioural profiles using structural decomposition techniques for sound free-choice workflow systems, provided that unstructured net fragments are acyclic or can be traced back to S- or T-nets. We also elaborate on the findings of applying our technique to three industry model collections.
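A small, hypothetical sketch of the pairwise relations such a profile captures, approximated here from traces rather than computed from the model structure as the article proposes:

```python
from itertools import combinations

def behavioural_profile(traces):
    """Derive order, exclusiveness and interleaving relations between
    activity pairs from a set of traces (a log-based approximation of
    the profile the article defines on the model)."""
    activities = {a for t in traces for a in t}
    follows = set()  # (a, b) if a occurs before b in some trace
    for t in traces:
        for i, a in enumerate(t):
            for b in t[i + 1:]:
                follows.add((a, b))
    profile = {}
    for a, b in combinations(sorted(activities), 2):
        ab, ba = (a, b) in follows, (b, a) in follows
        if ab and ba:
            profile[(a, b)] = "interleaving"
        elif ab:
            profile[(a, b)] = "strict order"
        elif ba:
            profile[(a, b)] = "reverse strict order"
        else:
            profile[(a, b)] = "exclusive"
    return profile

log = [("a", "b", "c"), ("a", "c")]
print(behavioural_profile(log))
# {('a','b'): 'strict order', ('a','c'): 'strict order', ('b','c'): 'strict order'}
```

A degree of consistency between two artefacts can then be quantified as the fraction of activity pairs whose relations agree, rather than a single Boolean verdict.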
Abstract:
This paper reviews some recent results in motion control of marine vehicles using a technique called Interconnection and Damping Assignment Passivity-based Control (IDA-PBC). This approach to motion control exploits the facts that vehicle dynamics can be described in terms of energy storage, distribution, and dissipation, and that the stable equilibrium points of mechanical systems are those at which the potential energy attains a minimum. The control forces are used to transform the closed-loop dynamics into a port-controlled Hamiltonian system with dissipation. This is achieved by shaping the energy-storing characteristics of the system, modifying its interconnection structure (how the energy is distributed), and injecting damping. The end result is that the closed-loop system presents a stable equilibrium (hopefully global) at the desired operating point. By forcing the closed-loop dynamics into Hamiltonian form, the resulting total energy function of the system serves as a Lyapunov function that can be used to demonstrate stability. We consider the tracking and regulation of fully actuated unmanned underwater vehicles, the extension to under-actuated slender vehicles, and manifold regulation of under-actuated surface vessels. The paper concludes with an outlook on future research.
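For reference, the target closed-loop form used in IDA-PBC is the standard port-controlled Hamiltonian system with dissipation:

$$
\dot{x} = \big[J_d(x) - R_d(x)\big]\,\nabla H_d(x),
$$

where $J_d = -J_d^{\top}$ encodes the interconnection (energy distribution), $R_d \succeq 0$ the injected damping, and $H_d$ the shaped energy with a minimum at the desired equilibrium. Along trajectories, $\dot{H}_d = -\nabla H_d^{\top} R_d\,\nabla H_d \le 0$, which is why $H_d$ serves as a Lyapunov function.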
Abstract:
Recent advances suggest that encoding images through Symmetric Positive Definite (SPD) matrices and then interpreting such matrices as points on Riemannian manifolds can lead to increased classification performance. Taking manifold geometry into account is typically done via (1) embedding the manifolds in tangent spaces, or (2) embedding into Reproducing Kernel Hilbert Spaces (RKHS). While embedding into tangent spaces allows the use of existing Euclidean-based learning algorithms, the manifold shape is only approximated, which can cause loss of discriminatory information. The RKHS approach retains more of the manifold structure, but may require non-trivial effort to kernelise Euclidean-based learning algorithms. In contrast to the above approaches, in this paper we offer a novel solution that allows SPD matrices to be used with unmodified Euclidean-based learning algorithms, with the true manifold shape well preserved. Specifically, we propose to project SPD matrices, using a set of random projection hyperplanes over an RKHS, into a random projection space, which leads to representing each matrix as a vector of projection coefficients. Experiments on face recognition, person re-identification and texture classification show that the proposed approach outperforms several recent methods, such as Tensor Sparse Coding, Histogram Plus Epitome, Riemannian Locality Preserving Projection and Relational Divergence Classification.
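A simplified sketch of kernelised random projection on SPD matrices. Here a Gaussian kernel on the log-Euclidean distance stands in for whichever SPD kernel the paper actually uses, and the anchor matrices and weights are illustrative assumptions:

```python
import numpy as np

def spd_log(X: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(X, Y, beta=0.1):
    """Gaussian kernel on the log-Euclidean distance between SPD matrices
    (a stand-in for the paper's kernel choice)."""
    d = np.linalg.norm(spd_log(X) - spd_log(Y))
    return np.exp(-beta * d**2)

def random_rkhs_projections(X, anchors, alphas, kernel=log_euclidean_kernel):
    """Coefficients of phi(X) against random hyperplanes in the RKHS,
    each hyperplane being w_i = sum_j alphas[i, j] * phi(anchors[j])."""
    k = np.array([kernel(X, Z) for Z in anchors])
    return alphas @ k

rng = np.random.default_rng(0)
anchors = [np.cov(rng.normal(size=(50, 5)), rowvar=False) for _ in range(8)]
alphas = rng.normal(size=(3, 8))      # 3 random hyperplanes in the RKHS
X = np.cov(rng.normal(size=(50, 5)), rowvar=False)
print(random_rkhs_projections(X, anchors, alphas))  # 3 projection coefficients
```

The resulting coefficient vectors live in an ordinary Euclidean space, so unmodified Euclidean-based learning algorithms can be applied to them directly.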
Abstract:
Traditional nearest points methods use all the samples in an image set to construct a single convex or affine hull model for classification. However, strong artificial features and noisy data may be generated from combinations of training samples when significant intra-class variations and/or noise occur in the image set. Existing multi-model approaches extract local models by clustering each image set individually and only once, with the fixed clusters then used for matching against various image sets. This may not be optimal for discrimination, as undesirable environmental conditions (e.g., illumination and pose variations) may result in the two closest clusters representing different characteristics of an object (e.g., a frontal face being compared to a non-frontal face). To address this problem, we propose a novel approach that enhances nearest points based methods by integrating affine/convex hull classification with an adapted multi-model approach. We first extract multiple local convex hulls from a query image set via maximum margin clustering to diminish the artificial variations and constrain the noise in local convex hulls. We then propose adaptive reference clustering (ARC), which constrains the clustering of each gallery image set by forcing the clusters to resemble the clusters in the query image set. By applying ARC, noisy clusters in the query set can be discarded. Experiments on the Honda, MoBo and ETH-80 datasets show that the proposed method outperforms single-model approaches and other recent techniques, such as Sparse Approximated Nearest Points, the Mutual Subspace Method and Manifold Discriminant Analysis.
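For background, a minimal sketch of the affine hull image-set distance that nearest points methods build on; the paper replaces this single-hull model with multiple local convex hulls found by maximum margin clustering, and the energy threshold below is an assumption:

```python
import numpy as np

def affine_hull(X: np.ndarray, energy: float = 0.98):
    """Model an image set (columns of X) as an affine hull mu + span(U)."""
    mu = X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
    r = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy) + 1
    return mu, U[:, :r]

def affine_hull_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """Distance between the nearest points of two affine hulls,
    found by unconstrained least squares."""
    mu1, U1 = affine_hull(X)
    mu2, U2 = affine_hull(Y)
    A = np.hstack([U1, -U2])
    b = (mu2 - mu1).ravel()
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(np.linalg.norm(A @ c - b))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))        # 30 vectorised images, dimension 100
Y = rng.normal(size=(100, 25)) + 0.5
print(affine_hull_distance(X, Y))
```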
Abstract:
Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features.
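The Stein divergence has a closed form, so the similarity vectors are cheap to compute. A minimal sketch, with the bandwidth parameter as an assumption:

```python
import numpy as np

def stein_divergence(X: np.ndarray, Y: np.ndarray) -> float:
    """S(X, Y) = log det((X + Y)/2) - 0.5 * log det(X Y), for SPD X, Y."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def similarity_vector(X, representers, sigma=1.0):
    """Represent the manifold point X as a vector of similarities to
    class representers (one or more SPD matrices per class)."""
    return np.array([np.exp(-stein_divergence(X, R) / sigma)
                     for R in representers])

rng = np.random.default_rng(1)
covs = [np.cov(rng.normal(size=(40, 4)), rowvar=False) for _ in range(3)]
print(similarity_vector(covs[0], covs))   # highest similarity to itself
```

The similarity vectors live in a Euclidean space, so an ordinary discriminative mapping (e.g., a linear classifier) can be trained on them for the final decision.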
Abstract:
Existing multi-model approaches for image set classification extract local models by clustering each image set individually and only once, with the fixed clusters then used for matching against other image sets. However, this may result in the two closest clusters representing different characteristics of an object, due to differing undesirable environmental conditions (such as variations in illumination and pose). To address this problem, we propose to constrain the clustering of each query image set by forcing the clusters to resemble the clusters in the gallery image sets. We first define a Frobenius norm distance between subspaces over Grassmann manifolds based on reconstruction error. We then extract local linear subspaces from a gallery image set via sparse representation. For each local linear subspace, we adaptively construct the corresponding closest subspace from the samples of a probe image set by joint sparse representation. We show that by minimising the sparse representation reconstruction error, we approach the nearest point on a Grassmann manifold. Experiments on the Honda, ETH-80 and Cambridge-Gesture datasets show that the proposed method consistently outperforms several other recent techniques, such as Affine Hull based Image Set Distance (AHISD), Sparse Approximated Nearest Points (SANP) and Manifold Discriminant Analysis (MDA).
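One standard way to realise a Frobenius norm distance between subspaces on a Grassmann manifold is through their projection matrices; a brief sketch, standing in for the paper's reconstruction-error formulation:

```python
import numpy as np

def projection_frobenius_distance(U1: np.ndarray, U2: np.ndarray) -> float:
    """Distance between span(U1) and span(U2), each given by a matrix
    with orthonormal columns, as the Frobenius norm of the difference
    of their projection matrices."""
    P1, P2 = U1 @ U1.T, U2 @ U2.T
    return float(np.linalg.norm(P1 - P2, "fro"))

# Local linear subspaces extracted from image sets via (joint) sparse
# representation would be compared with such a distance; here, random
# orthonormal bases serve as placeholders:
rng = np.random.default_rng(0)
U1, _ = np.linalg.qr(rng.normal(size=(20, 3)))
U2, _ = np.linalg.qr(rng.normal(size=(20, 3)))
print(projection_frobenius_distance(U1, U2))
```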