53 results for Combinatorial Grassmannian
Abstract:
This paper considers the problem of reconstructing the motion of a 3D articulated tree from 2D point correspondences subject to some temporal prior. Hitherto, smooth motion has been encouraged using a trajectory basis, yielding a hard combinatorial problem with time complexity growing exponentially in the number of frames. Branch and bound strategies have previously attempted to curb this complexity whilst maintaining global optimality. However, they provide no guarantee of being more efficient than exhaustive search. Inspired by recent work which reconstructs general trajectories using compact high-pass filters, we develop a dynamic programming approach which scales linearly in the number of frames, leveraging the intrinsically local nature of filter interactions. Extension to affine projection enables reconstruction without estimating cameras.
Abstract:
In the real world there are many problems in networks of networks (NoNs) that can be abstracted to a so-called minimum interconnection cut problem, which is fundamentally different from the classical minimum cut problems in graph theory. It is therefore desirable to have an efficient and effective algorithm for the minimum interconnection cut problem. In this paper we formulate the problem in graph-theoretic terms, transform it into a multi-objective and multi-constraint combinatorial optimization problem, and propose a hybrid genetic algorithm (HGA) for it. The HGA is a penalty-based genetic algorithm (GA) that incorporates an effective heuristic procedure to locally optimize the individuals in the GA's population. The HGA has been implemented and evaluated experimentally, and the results show that it is both effective and efficient.
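The abstract gives no implementation details, but the general shape of a penalty-based GA hybridised with a local-improvement heuristic can be sketched on a generic toy constrained problem (the objective, penalty weight and all parameter values below are illustrative assumptions, not the paper's formulation):

```python
import random

def hybrid_ga(costs, weights, capacity, pop_size=30, gens=60, penalty=100.0, seed=1):
    """Toy penalty-based hybrid GA: pick a subset of items minimising total
    cost subject to a minimum total weight (a stand-in for cut constraints).
    Infeasible individuals are penalised rather than discarded."""
    rng = random.Random(seed)
    n = len(costs)

    def fitness(ind):
        cost = sum(c for c, b in zip(costs, ind) if b)
        weight = sum(w for w, b in zip(weights, ind) if b)
        shortfall = max(0, capacity - weight)    # constraint violation
        return cost + penalty * shortfall        # penalty-based evaluation

    def local_improve(ind):
        # heuristic step: greedily flip any bit that lowers the penalised fitness
        ind = ind[:]
        for i in range(n):
            trial = ind[:]
            trial[i] ^= 1
            if fitness(trial) < fitness(ind):
                ind = trial
        return ind

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]             # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(n)] ^= 1
            children.append(local_improve(child))  # the "hybrid" part
        pop = elite + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

The penalty term lets the search traverse infeasible regions while still steering it toward feasible solutions, which is the usual motivation for penalty-based GAs over repair-based ones.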
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for SaaS providers managing SaaS, especially in large-scale data centres such as the Cloud. One of the challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of only a homogeneous type of component. A precise formulation of the composite SaaS placement problem is presented.
A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic nature of the Cloud environment, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise resource use and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives.
The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a lower-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have been developed to address these problems, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
Abstract:
Cancer-associated proteases promote peritoneal dissemination and chemoresistance in malignant progression. In this study, kallikrein-related peptidases 4, 5, 6, and 7 (KLK4-7)-cotransfected OV-MZ-6 ovarian cancer cells were embedded in a bioengineered three-dimensional (3D) microenvironment that contains RGD motifs for integrin engagement to analyze their spheroid growth and survival after chemotreatment. KLK4-7-cotransfected cells formed larger spheroids and proliferated more than controls in 3D, particularly within RGD-functionalized matrices, which was reduced upon integrin inhibition. In contrast, KLK4-7-expressing cell monolayers proliferated less than controls, emphasizing the relevance of the 3D microenvironment and integrin engagement. In a spheroid-based animal model, KLK4-7-overexpression induced tumor growth after 4 weeks and intraperitoneal spread after 8 weeks. Upon paclitaxel administration, KLK4-7-expressing tumors declined in size by 91% (controls: 87%) and showed 90% less metastatic outgrowth (controls: 33%, P<0.001). KLK4-7-expressing spheroids showed 53% survival upon paclitaxel treatment (controls: 51%), accompanied by enhanced chemoresistance-related factors, and their survival was further reduced by combination treatment of paclitaxel with KLK4/5/7 (22%, P=0.007) or MAPK (6%, P=0.006) inhibition. The concomitant presence of KLK4-7 in ovarian cancer cells together with integrin activation drives spheroid formation and proliferation. Combinatorial approaches of paclitaxel and KLK/MAPK inhibition may be more efficient for late-stage disease than chemotherapeutics alone as these inhibitory regimens reduced cancer spheroid growth to a greater extent than paclitaxel alone.
Abstract:
The railway crew scheduling problem is the process of allocating train services to crew duties based on the published train timetable while satisfying operational and contractual requirements. The problem is restricted by many constraints and belongs to the class of NP-hard problems. In this paper, we develop a mathematical model for railway crew scheduling with the aim of minimising the number of crew duties by reducing idle transition times. Duties are generated by arranging scheduled trips over a set of duties and sequentially ordering the set of trips within each duty. The optimisation model includes the time period of relief opportunities within which a train crew can be relieved at any relief point; existing models and algorithms usually only consider relieving a crew at the beginning of this interval, which may be impractical. The model involves a large number of decision variables and constraints, so a constructive heuristic hybridised with a simulated annealing search algorithm is applied to yield an optimal or near-optimal schedule. The performance of the proposed algorithms is evaluated through computational experiments on randomly generated test instances. The results show that the proposed approaches obtain near-optimal solutions in reasonable computational time for large-sized problems.
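As a rough illustration of the constructive-heuristic-plus-simulated-annealing approach, the sketch below sequences trips within a single toy duty to minimise idle transition time (the trip data, overlap penalty and cooling schedule are illustrative assumptions, not the paper's model):

```python
import math
import random

def idle_time(order, trips):
    """Total idle time when trips are run in the given order; overlapping
    trips (next start before previous end) incur a heavy penalty."""
    total = 0.0
    for a, b in zip(order, order[1:]):
        gap = trips[b][0] - trips[a][1]        # next start minus previous end
        total += gap if gap >= 0 else 1000.0   # penalty for infeasible pairs
    return total

def anneal(trips, temp=50.0, cooling=0.995, steps=4000, seed=0):
    rng = random.Random(seed)
    # constructive heuristic: start from trips sorted by departure time
    order = sorted(range(len(trips)), key=lambda i: trips[i][0])
    best, best_cost = order[:], idle_time(order, trips)
    cost = best_cost
    for _ in range(steps):
        i, j = rng.sample(range(len(trips)), 2)   # neighbour: swap two trips
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]
        cand_cost = idle_time(cand, trips)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / temp):
            order, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = order[:], cost
        temp *= cooling                           # geometric cooling
    return best, best_cost
```

A full crew scheduling solver would search over assignments of trips to many duties under relief-opportunity constraints; this sketch only shows the annealing mechanics on the sequencing sub-step.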
Abstract:
The Crew Scheduling Problem (CSP) in transportation systems can be too complex to capture in full detail, so the models designed for it usually ignore or simplify features that are difficult to formulate. This paper proposes an alternative formulation of the problem using a Mixed Integer Programming (MIP) approach. The optimisation model integrates the two phases of pairing generation and pairing optimisation by simultaneously sequencing trips into feasible duties and minimising the total elapsed time of each duty. Crew scheduling constraints requiring the crew to return to their home depot at the end of the shift are included in the model. The flexibility of this model comes from the inclusion of the time interval of relief opportunities, allowing the crew to be relieved during a finite time interval. This enhances the robustness of the schedule and provides a better representation of real-world conditions.
Abstract:
We consider the following problem: members of a dynamic group retrieve their encrypted data from an untrusted server based on keywords, without any loss of data confidentiality or members' privacy. In this paper, we investigate common secure indices for conjunctive keyword-based retrieval over encrypted data, and construct an efficient scheme from Wang et al.'s dynamic accumulator, Nyberg's combinatorial accumulator and Kiayias et al.'s public-key encryption system. The proposed scheme is trapdoorless and keyword-field free. Its security is proved in the random oracle model under the decisional composite residuosity and extended strong RSA assumptions.
Abstract:
Replacement of endogenous genes by homologous recombination is rare in plants; the majority of genetic modifications are the result of transforming DNA molecules undergoing random genomic insertion by way of non-homologous recombination. Factors that affect chromatin remodeling and DNA repair are thought to have the potential to enhance the frequency of homologous recombination in plants. Conventional tools to study the frequencies of genetic recombination often rely on stable transformation-based approaches, and these systems are rarely capable of high-throughput or combinatorial analysis. We developed a series of vectors that use chemiluminescent (LUC and REN) reporter genes to assay the relative frequency of homologous and non-homologous recombination in plants. These transient assay vectors were used to screen 14 candidate genes for their effects on recombination frequencies in Nicotiana benthamiana plants. Over-expression of Arabidopsis genes with sequence similarity to SNM1 from yeast and XRCC3 from humans enhanced the frequency of non-homologous recombination when assayed using two different donor vectors. Transient N. benthamiana leaf systems were also used in an alternative assay for preliminary measurements of homologous recombination frequencies, which were found to be enhanced by over-expression of RAD52, MIM and RAD51 from yeast, as well as CHR24 from Arabidopsis. The findings for the assays described here are in line with previous studies that analyzed recombination frequencies using stable transformation. The assays we report have revealed functions in non-homologous recombination for the Arabidopsis SNM1 and XRCC3 genes, so the suppression of these genes' expression offers a potential means to enhance the gene targeting frequency in plants. Furthermore, our findings also indicate that plant gene targeting frequencies could be enhanced by over-expression of the RAD52, MIM, CHR24, and RAD51 genes.
Abstract:
We first classify state-of-the-art approaches to the stream authentication problem in the multicast environment, grouping them into signing and MAC approaches. A new approach for authenticating digital streams using threshold techniques is then introduced. The main advantages of the new approach are that it tolerates packet loss, up to a threshold number of packets, with minimal space overhead. It is most suitable for multicast applications running over lossy, unreliable communication channels while at the same time preserving the security requirements. We use linear equations based on Lagrange polynomial interpolation, together with combinatorial design methods.
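Lagrange-interpolation-based threshold schemes of the kind mentioned above can be illustrated with a standard Shamir-style sketch: a value (here a single field element standing in for packet data) is split into n shares so that any k of them suffice for recovery, tolerating the loss of up to n−k shares. This is a generic textbook illustration, not the paper's exact construction:

```python
import random

PRIME = 2**61 - 1  # Mersenne prime; all polynomial arithmetic is mod PRIME

def make_shares(secret, k, n, seed=None):
    """Shamir-style sharing: a random degree-(k-1) polynomial with the secret
    as constant term, evaluated at points 1..n.  Any k shares recover the
    secret, so up to n-k shares (lost packets) can be tolerated."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation evaluated at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # multiply by the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In a stream setting the sender would disperse authentication information across n packets this way, so a receiver holding any k of them can verify the stream.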
Abstract:
In a traditional anti-jamming system, a transmitter who wants to send a signal to a single receiver spreads the signal power over a wide frequency spectrum with the aim of stopping a jammer from blocking the transmission. In this paper, we consider the case in which there are multiple receivers and the transmitter wants to broadcast a message to all of them such that colluding groups of receivers cannot jam the reception of any other receiver. We propose efficient coding methods that achieve this goal and link the problem to well-known problems in combinatorics. We also link a generalisation of this problem to the Key Distribution Pattern problem studied in combinatorial cryptography.
Abstract:
One-time proxy signatures are one-time signatures for which a primary signer can delegate his or her signing capability to a proxy signer. In this work we propose two one-time proxy signature schemes with different security properties. Unlike other existing one-time proxy signatures, which are constructed from public-key cryptography, our proposed schemes are based on one-way functions without trapdoors, and so they inherit the communication and computation efficiency of traditional one-time signatures. Although, from a verifier's point of view, signatures generated by the proxy are indistinguishable from those created by the primary signer, a trusted authority can be equipped with an algorithm that allows it to settle disputes between the signers. In our constructions, we use a combination of one-time signatures, oblivious transfer protocols and certain combinatorial objects. We characterise these new combinatorial objects and present constructions for them.
Abstract:
Through a combinatorial approach involving experimental measurement and plasma modelling, it is shown that a high degree of control over diamond-like nanocarbon film sp3/sp2 ratio (and hence film properties) may be exercised, starting at the level of electrons (through modification of the plasma electron energy distribution function). Hydrogenated amorphous carbon nanoparticle films with high percentages of diamond-like bonds are grown using a middle-frequency (2 MHz) inductively coupled Ar + CH4 plasma. The sp3 fractions measured in the thin films by X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy are explained qualitatively using sp3/sp2 ratios (1) derived from calculated sp3- and sp2-hybridized precursor species densities in a global plasma discharge model and (2) measured experimentally. It is shown that at high discharge power and lower CH4 concentrations, the sp3/sp2 fraction is higher. Our results suggest that a combination of predictive modeling and experimental studies is instrumental to achieving deterministically grown, made-to-order diamond-like nanocarbons suitable for a variety of applications spanning from nano-magnetic resonance imaging to spin-flip quantum information devices. This deterministic approach can be extended to graphene, carbon nanotips, nanodiamond and other nanocarbon materials for a variety of applications.
Abstract:
Cancer can be defined as a deregulation or hyperactivity in the ongoing network of intracellular and extracellular signaling events. Reverse phase protein microarray technology may offer a new opportunity to measure and profile these signaling pathways, providing data on post-translational phosphorylation events not obtainable by gene microarray analysis. Treatment of ovarian epithelial carcinoma almost always takes place in a metastatic setting since, unfortunately, the disease is often not detected until later stages. Thus, in addition to elucidation of the molecular network within a tumor specimen, critical questions are to what extent signaling changes occur upon metastasis and whether there are common pathway elements that arise in the metastatic microenvironment. For individualized combinatorial therapy, ideal therapeutic selection based on proteomic mapping of phosphorylation end points may require evaluation of the patient's metastatic tissue. Extending these findings to the bedside will require the development of optimized protocols and reference standards. We have developed a reference standard based on a mixture of phosphorylated peptides to begin to address this challenge.
Abstract:
We study the natural problem of secure n-party computation (in the passive, computationally unbounded attack model) of the n-product function f_G(x_1, ..., x_n) = x_1 · x_2 ⋯ x_n in an arbitrary finite group (G, ·), where the input of party P_i is x_i ∈ G for i = 1, ..., n. For flexibility, we are interested in protocols for f_G which require only black-box access to the group G (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element). Our results are as follows. First, on the negative side, we show that if (G, ·) is non-abelian and n ≥ 4, then no ⌈n/2⌉-private protocol for computing f_G exists. Second, on the positive side, we initiate an approach for the construction of black-box protocols for f_G based on k-of-k threshold secret sharing schemes, which are efficiently implementable over any black-box group G. We reduce the problem of constructing such protocols to a combinatorial colouring problem in planar graphs, and we give two constructions for such graph colourings. Our first colouring construction gives a protocol with optimal collusion resistance t < n/2, but has exponential communication complexity of O(n · (2t+1 choose t)^2 / t) group elements (this construction easily extends to general adversary structures). Our second, probabilistic colouring construction gives a protocol with (close to optimal) collusion resistance t < n/μ for a graph-related constant μ ≤ 2.948, and has efficient communication complexity of O(n · t^2) group elements. Furthermore, we believe that our results can be improved by further study of the associated combinatorial problems.
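The k-of-k threshold secret sharing used as a building block requires only black-box group operations (operation, inverse, uniform sampling), which can be seen concretely in a toy sketch over the small non-abelian group S_3; the choice of group and parameters is illustrative, and the paper's protocols build far more machinery on top of this primitive:

```python
import random
from itertools import permutations

# "Black-box" access to the symmetric group S_3: its elements plus an
# operation, an inverse, and uniform sampling (via rng.choice).
ELEMS = list(permutations(range(3)))
IDENTITY = (0, 1, 2)

def op(p, q):
    """Group operation: composition of permutations, (p∘q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inv(p):
    """Group inverse of a permutation."""
    r = [0, 0, 0]
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

def share(x, k, rng):
    """k-of-k sharing: write x = s_1 · s_2 ⋯ s_k with s_1..s_{k-1} uniformly
    random, so all k shares are needed to recover x.  Only black-box group
    operations are used, so this works over any finite group."""
    s = [rng.choice(ELEMS) for _ in range(k - 1)]
    prefix = IDENTITY
    for si in s:
        prefix = op(prefix, si)
    s.append(op(inv(prefix), x))   # s_k completes the product to x
    return s

def reconstruct(shares):
    """Multiply the shares in order to recover the shared element."""
    out = IDENTITY
    for si in shares:
        out = op(out, si)
    return out
```

Because no homomorphic or abelian structure is assumed, the same two functions work verbatim if `ELEMS`, `op` and `inv` are swapped for any other finite group.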
Abstract:
The power of sharing computation in a cryptosystem is crucial in several real-life applications of cryptography. Cryptographic primitives and tasks to which threshold cryptosystems have been applied include variants of digital signatures, identification, public-key encryption and block ciphers. It is desirable to extend the domain of cryptographic primitives to which threshold cryptography can be applied. This paper studies threshold message authentication codes (threshold MACs). Threshold cryptosystems usually exploit algebraically homomorphic properties of the underlying cryptographic primitives: a typical approach to constructing a threshold cryptographic scheme is to combine a (linear) secret sharing scheme with an algebraically homomorphic primitive. The lack of algebraic properties of MACs rules out such an approach to sharing MACs. In this paper, we propose a method of obtaining a threshold MAC using a combinatorial approach. Our method is generic in the sense that it is applicable to any secure conventional MAC, making use of certain combinatorial objects such as cover-free families and their variants. We discuss the issue of anonymity in threshold cryptography, a subject that has not been addressed previously in the literature, and we show that there are trade-offs between the anonymity and efficiency of threshold MACs.
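One standard combinatorial route from such set systems to a threshold MAC can be sketched as follows: a dealer derives one key per (t−1)-subset of parties and gives it to every party outside that subset, so any t parties jointly hold all keys while any smaller coalition is missing at least one. The XOR-of-HMACs combination below is a toy illustration of this key-assignment idea, not the paper's actual scheme:

```python
import hmac
import hashlib
from itertools import combinations

def deal_keys(n, t, master_seed=b"demo-master-seed"):
    """Give key K_S (one per (t-1)-subset S of parties) to every party NOT
    in S.  Any t parties jointly hold every key; any coalition of size t-1
    misses exactly the key indexed by its own member set."""
    keys = {S: hmac.new(master_seed, repr(S).encode(), hashlib.sha256).digest()
            for S in combinations(range(n), t - 1)}
    holdings = {p: {S: k for S, k in keys.items() if p not in S}
                for p in range(n)}
    return keys, holdings

def partial_tags(party_keys, msg):
    """A party's contribution: one HMAC tag per key it holds."""
    return {S: hmac.new(k, msg, hashlib.sha256).digest()
            for S, k in party_keys.items()}

def combine(tag_shares, n, t):
    """XOR one tag per key index into the final MAC; returns None if the
    coalition is too small and some key index is uncovered."""
    merged = {}
    for shares in tag_shares:
        merged.update(shares)
    if len(merged) != len(list(combinations(range(n), t - 1))):
        return None   # a key is missing: fewer than t distinct parties
    out = bytes(32)
    for S in sorted(merged):
        out = bytes(a ^ b for a, b in zip(out, merged[S]))
    return out
```

The key-storage cost here grows as C(n, t−1) per scheme, which is exactly the kind of combinatorial efficiency trade-off the abstract alludes to.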