Abstract:
Many cell types form clumps or aggregates when cultured in vitro through a variety of mechanisms including rapid cell proliferation, chemotaxis, or direct cell-to-cell contact. In this paper we develop an agent-based model to explore the formation of aggregates in cultures where cells are initially distributed uniformly, at random, on a two-dimensional substrate. Our model includes unbiased random cell motion, together with two mechanisms which can produce cell aggregates: (i) rapid cell proliferation, and (ii) a biased cell motility mechanism where cells can sense other cells within a finite range, and will tend to move towards areas with higher numbers of cells. We then introduce a pair-correlation function which allows us to quantify aspects of the spatial patterns produced by our agent-based model. In particular, these pair-correlation functions are able to detect differences between domains populated uniformly at random (i.e. in the exclusion complete spatial randomness (ECSR) state) and those where the proliferation and biased motion rules have been employed, even when such differences are not obvious to the naked eye. The pair-correlation function can also detect the emergence of a characteristic inter-aggregate distance which occurs when the biased motion mechanism is dominant, and is not observed when cell proliferation is the main mechanism of aggregate formation. This suggests that applying the pair-correlation function to experimental images of cell aggregates may provide information about the mechanism associated with observed aggregates. As a proof of concept, we perform such analysis for images of cancer cell aggregates, which are known to be associated with rapid proliferation. The results of our analysis are consistent with the predictions of the proliferation-based simulations, which supports the potential usefulness of pair-correlation functions for providing insight into the mechanisms of aggregate formation.
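A minimal estimator of the pair-correlation function g(r) for points on a periodic square domain can be sketched as follows. The function and parameter names are illustrative, not the authors' implementation; under complete spatial randomness g(r) ≈ 1, while aggregation produces g(r) > 1 at short distances.

```python
import numpy as np

def pair_correlation(points, domain_size, dr, r_max):
    """Estimate the pair-correlation function g(r) for 2D points on a
    periodic square domain of side `domain_size`.  g(r) ~ 1 indicates
    spatial randomness; g(r) > 1 at short r indicates aggregation."""
    n = len(points)
    density = n / domain_size**2
    bins = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(bins) - 1)
    for i in range(n):
        d = np.abs(points - points[i])        # pairwise offsets
        d = np.minimum(d, domain_size - d)    # periodic wrap-around
        r = np.hypot(d[:, 0], d[:, 1])
        r = r[r > 0]                          # drop the self-distance
        counts += np.histogram(r, bins=bins)[0]
    # expected (ordered) pair counts per annulus under spatial randomness
    shell_areas = np.pi * (bins[1:]**2 - bins[:-1]**2)
    expected = n * density * shell_areas
    return bins[:-1] + dr / 2, counts / expected
```

For uniformly random points the returned g(r) fluctuates around 1 at all distances, which is the baseline the abstract's comparison relies on.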
Abstract:
The acceptance of broadband ultrasound attenuation for the assessment of osteoporosis suffers from a limited understanding of ultrasound wave propagation through cancellous bone. It has recently been proposed that the ultrasound wave propagation can be described by a concept of parallel sonic rays. This concept approximates the detected transmission signal to be the superposition of all sonic rays that travel directly from transmitting to receiving transducer. The transit time of each ray is defined by the proportions of bone and marrow through which it propagates. An ultrasound transit time spectrum describes the proportion of sonic rays having a particular transit time, effectively describing lateral inhomogeneity of transit times over the surface of the receiving ultrasound transducer. The aim of this study was to provide a proof of concept that a transit time spectrum may be derived from digital deconvolution of input and output ultrasound signals. We have applied the active-set method deconvolution algorithm to determine the ultrasound transit time spectra in the three orthogonal directions of four cancellous bone replica samples and have compared experimental data with the prediction from the computer simulation. The agreement between experimental and predicted ultrasound transit time spectra, assessed by Bland–Altman analysis, ranged from 92% to 99%, thereby supporting the concept of parallel sonic rays for ultrasound propagation in cancellous bone. In addition to further validation of the parallel sonic ray concept, this technique offers the opportunity to consider quantitative characterisation of the material and structural properties of cancellous bone, not previously available using ultrasound.
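The deconvolution step can be sketched with SciPy's `scipy.optimize.nnls`, which implements the Lawson–Hanson active-set algorithm for non-negative least squares. This is a simplified illustration of recovering a non-negative transit-time spectrum from input and output signals, not the authors' exact pipeline; signal shapes and lengths are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def transit_time_spectrum(input_signal, output_signal):
    """Recover a non-negative spectrum h such that output ~ input * h
    (discrete convolution), via Lawson-Hanson active-set NNLS."""
    n = len(output_signal)
    # Toeplitz convolution matrix: column k is the input delayed by k samples
    A = np.zeros((n, n))
    for k in range(n):
        A[k:, k] = input_signal[: n - k]
    h, _residual = nnls(A, output_signal)
    return h
```

With a noiseless synthetic output built from two delayed, scaled copies of the input pulse, the recovered spectrum shows spikes at the corresponding transit times.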
Abstract:
A SINGLE document was all it took to illuminate a dark secret in the Church of England. The two-page child protection report, unearthed by police in the archives of the diocese of Manchester, was proof, at last, that a former cathedral choirboy -- alleging years of sexual abuse by one of Britain's most senior clergymen -- was not alone. There was another boy. Also a solo soprano, on the other side of the world, who was singing from the same hymn sheet about The Very Reverend Robert Waddington. "There had been a previous referral about sexual impropriety some time ago from Australia, where RW had been the headmaster at a school. An ex-pupil had made a complaint to the Bishop of (north) Queensland who had relayed it to the Archbishop (of York)," the 2003 report says. Eli Ward's family had prompted the secret report when they told church officials, without Ward's knowledge, of the alleged abuse he suffered in the mid-1980s.
Abstract:
This paper describes the development of a novel vision-based autonomous surface vehicle with the purpose of performing coordinated docking manoeuvres with a target, such as an autonomous underwater vehicle, at the water's surface. The system architecture integrates two small processor units; the first performs vehicle control and implements a virtual force based docking strategy, with the second performing vision-based target segmentation and tracking. Furthermore, the architecture utilises wireless sensor network technology, allowing the vehicle to be observed by, and even integrated within, an ad-hoc sensor network. Simulated and experimental results are presented demonstrating the autonomous vision-based docking strategy on a proof-of-concept vehicle.
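The idea of a virtual-force docking strategy can be illustrated with a toy guidance law: an attractive force pulls the vehicle toward the target, is saturated, and is converted into surge and heading commands. The gains, saturation limit, and function names here are illustrative assumptions, not the authors' controller.

```python
import numpy as np

def virtual_force_command(pos, heading, target, k_att=1.0, f_max=5.0):
    """Toy virtual-force docking guidance: an attractive force toward
    the target yields a desired heading and a surge command."""
    force = k_att * (np.asarray(target, float) - np.asarray(pos, float))
    norm = np.linalg.norm(force)
    if norm > f_max:                       # saturate the virtual force
        force = force * (f_max / norm)
    desired_heading = np.arctan2(force[1], force[0])
    # wrap heading error into (-pi, pi]
    heading_error = (desired_heading - heading + np.pi) % (2 * np.pi) - np.pi
    surge = np.linalg.norm(force) * np.cos(heading_error)  # body-x projection
    return surge, heading_error
```

A target dead ahead produces full surge and zero heading error; a target abeam produces a pure turn command.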
Abstract:
We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively-secure IBE and a Hierarchical IBE.
Abstract:
An encryption scheme is non-malleable if giving an encryption of a message to an adversary does not increase its chances of producing an encryption of a related message (under a given public key). Fischlin introduced a stronger notion, known as complete non-malleability, which requires attackers to have negligible advantage, even if they are allowed to transform the public key under which the related message is encrypted. Ventre and Visconti later proposed a comparison-based definition of this security notion, which is more in line with the well-studied definitions proposed by Bellare et al. They also provided additional feasibility results by proposing two constructions of completely non-malleable schemes, one in the common reference string model using non-interactive zero-knowledge proofs, and another using interactive encryption schemes. However, the only previously known completely non-malleable (and non-interactive) scheme in the standard model is quite inefficient, as it relies on the generic NIZK approach. They left the existence of efficient schemes in the common reference string model as an open problem. Recently, two efficient public-key encryption schemes have been proposed by Libert and Yung, and by Barbosa and Farshim, both based on pairing-based identity-based encryption. At ACISP 2011, Sepahi et al. proposed a method to achieve completely non-malleable encryption in the public-key setting using lattices, but no security proof was given for the proposed scheme. In this paper we review that scheme and provide its security proof in the standard model. Our study shows that Sepahi's scheme will remain secure even in a post-quantum world, since there are currently no known quantum algorithms for solving lattice problems that perform significantly better than the best known classical (i.e., non-quantum) algorithms.
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered to be primitives for multimedia types such as images and videos since they are organized forms of numeric information. Thereby, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and discourage information theft on social networking sites and the Internet in general. Unfortunately, there has been very limited research done in the field of numeric set watermarking due to underlying limitations in terms of the number of items in the set and the LSBs in each item available for watermarking. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items' most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in the least significant bits, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with close to 100% success rate. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack and propose potential safeguards that can provide resilience against this attack.
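The core of the bucket attack can be sketched as follows. Because the embedding decision in the described model depends only on a hash of an item's MSBs, all items sharing the same MSBs are treated identically, so an attacker can group items by MSB key and randomise the LSBs of each bucket to erase the mark without disturbing the MSB-derived hash. The bit widths and integer representation here are simplified assumptions, not the exact parameters of Gupta et al.

```python
import random

def msb_key(value, msb_bits=8, total_bits=32):
    """Extract the most significant bits that drive the embedding hash."""
    return value >> (total_bits - msb_bits)

def bucket_attack(items, msb_bits=8, total_bits=32, lsb_bits=2):
    """Group items by shared MSBs, then randomise the LSBs of every item.
    Items in the same bucket carry (or lack) watermark bits in the same
    way, so flipping LSBs erases the mark while leaving MSBs intact."""
    buckets = {}
    for it in items:
        buckets.setdefault(msb_key(it, msb_bits, total_bits), []).append(it)
    mask = (1 << lsb_bits) - 1
    attacked = []
    for members in buckets.values():
        for it in members:
            attacked.append((it & ~mask) | random.getrandbits(lsb_bits))
    return attacked, buckets
```

Note that only the low bits change, so the attacked data stays numerically close to the original while any LSB-resident watermark is destroyed.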
Abstract:
We consider the problem of increasing the threshold parameter of a secret-sharing scheme after the setup (share distribution) phase, without further communication between the dealer and the shareholders. Previous solutions to this problem require one to start off with a nonstandard scheme designed specifically for this purpose, or to have communication between shareholders. In contrast, we show how to increase the threshold parameter of the standard Shamir secret-sharing scheme without communication between the shareholders. Our technique can thus be applied to existing Shamir schemes even if they were set up without consideration to future threshold increases. Our method is a new positive cryptographic application for lattice reduction algorithms, inspired by recent work on lattice-based list decoding of Reed-Solomon codes with noise bounded in the Lee norm. We use fundamental results from the theory of lattices (geometry of numbers) to prove quantitative statements about the information-theoretic security of our construction. These lattice-based security proof techniques may be of independent interest.
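For reference, the standard Shamir scheme whose threshold the paper increases can be sketched over a toy prime field. This is only the baseline share/reconstruct protocol; the prime, the API, and the names are illustrative, and the paper's lattice-reduction technique for raising the threshold is not shown.

```python
import random

P = 2**61 - 1  # illustrative prime field modulus

def share(secret, t, n):
    """Split `secret` into n Shamir shares with threshold t: the secret
    is the constant term of a random degree-(t-1) polynomial over GF(P)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) from >= t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any t of the n shares recover the secret; increasing t after distribution, without redealing shares, is exactly the problem the abstract addresses.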
Abstract:
This undergraduate student paper explores usage of mixed reality techniques as support tools for conceptual design. A proof-of-concept was developed to illustrate this principle. Using this as an example, a small group of designers was interviewed to determine their views on the use of this technology. These interviews are the main contribution of this paper. Several interesting applications were determined, suggesting possible usage in a wide range of domains. Paper-based sketching, mixed reality and sketch augmentation techniques complement each other, and the combination results in a highly intuitive interface.
Abstract:
In this paper, we address the control design problem of positioning of over-actuated marine vehicles with control allocation. The proposed design is based on combined position and velocity loops in a multi-variable anti-windup implementation together with a control allocation mapping. The vehicle modelling is considered with appropriate simplifications related to low-speed manoeuvring hydrodynamics and vehicle symmetry. We derive analytical tuning rules based on requirements of closed-loop stability and performance. The anti-windup implementation of the controller is obtained by mapping the actuator-force constraint set into a constraint set for the generalized forces. This approach ensures that actuation capacity is not violated by constraining the generalized control forces; thus, the control allocation is simplified since it can be formulated as an unconstrained problem. The mapping can also be modified on-line based on actuator availability to provide actuator-failure accommodation. We provide a proof of the closed-loop stability and illustrate the performance using simulation scenarios for an open-frame underwater vehicle.
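The allocation idea can be illustrated with a toy example: generalized forces τ map to actuator commands through u = B⁺τ, and τ is shrunk (direction-preserving) whenever the resulting u would exceed an actuator limit, so the allocation itself stays unconstrained. The thruster geometry, limits, and scaling rule here are illustrative assumptions, not the paper's exact mapping.

```python
import numpy as np

def allocate(tau, B, u_max):
    """Unconstrained least-squares allocation u = pinv(B) @ tau, with the
    command uniformly scaled down if any actuator limit |u_i| <= u_max
    would be violated.  Returns actuator commands and the realised tau."""
    u = np.linalg.pinv(B) @ tau
    peak = np.max(np.abs(u) / u_max)
    if peak > 1.0:            # shrink toward the feasible set, same direction
        u = u / peak
    return u, B @ u
```

When τ is feasible it is realised exactly; when it is not, the realised generalized force keeps the requested direction but is scaled to respect actuator capacity.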
Abstract:
This thesis opens up the design space for awareness research in CSCW and HCI. By challenging the prevalent understanding of roles in awareness processes and exploring different mechanisms for actively engaging users in the awareness process, this thesis provides a better understanding of the complexity of these processes and suggests practical solutions for designing and implementing systems that support active awareness. Mutual awareness, a prominent research topic in the fields of Computer-Supported Cooperative Work (CSCW) and Human-Computer Interaction (HCI), refers to a fundamental aspect of a person's work: their ability to gain a better understanding of a situation by perceiving and interpreting their co-workers' actions. Technologically-mediated awareness, used to support co-workers across distributed settings, distinguishes between the roles of the actor, whose actions are often limited to being the target of an automated data-gathering process, and the receiver, who wants to be made aware of the actors' actions. This receiver-centric view of awareness, focusing on helping receivers to deal with complex sets of awareness information, stands in stark contrast to our understanding of awareness as a social process involving complex interactions between both actors and receivers. It fails to take into account an actor's intimate understanding of their own activities and the contribution that this subjective understanding could make in providing richer awareness information. In this thesis I challenge the prevalent receiver-centric notion of awareness, and explore the conceptual foundations, design, implementation and evaluation of an alternative active awareness approach by making the following five contributions. Firstly, I identify the limitations of existing awareness research and solicit further evidence to support the notion of active awareness.
I analyse ethnographic workplace studies that demonstrate how actors engage in an intricate interplay involving the monitoring of their co-workers' progress and the displaying of aspects of their activities that may be of relevance to others. The examination of a large body of awareness research reveals that while disclosing information is a common practice in face-to-face collaborative settings, it has been neglected in implementations of technically mediated awareness. Based on these considerations, I introduce the notion of intentional disclosure to describe the action of users actively and deliberately contributing awareness information. I consider challenges and potential solutions for the design of active awareness. I compare a range of systems, each allowing users to share information about their activities at various levels of detail. I discuss one of the main challenges to active awareness: that disclosing information about activities requires some degree of effort. I discuss various representations of effort in collaborative work. These considerations reveal that there is a trade-off between the richness of awareness information and the effort required to provide this information. I propose a framework for active awareness, intended to help designers understand the scope and limitations of different types of intentional disclosure. I draw on the identified richness/effort trade-off to develop two types of intentional disclosure, both of which aim to facilitate the disclosure of information while reducing the effort required to do so. For both of these approaches, direct and indirect disclosure, I delineate how they differ from related approaches and define a set of design criteria intended to guide their implementation. I demonstrate how the framework of active awareness can be practically applied by building two proof-of-concept prototypes that implement direct and indirect disclosure respectively.
AnyBiff, implementing direct disclosure, allows users to create, share and use shared representations of activities in order to express their current actions and intentions. SphereX, implementing indirect disclosure, represents shared areas of interest or working context, and links sets of activities to these representations. Lastly, I present the results of the qualitative evaluation of the two prototypes and analyse the extent to which they implemented their respective disclosure mechanisms and supported active awareness. Both systems were deployed and tested in real-world environments. The results for AnyBiff showed that users developed a wide range of activity representations, some unanticipated, and actively used the system to disclose information. The results further highlighted a number of design considerations relating to the relationship between awareness and communication, and the role of ambiguity. The evaluation of SphereX validated the feasibility of the indirect disclosure approach. However, the study highlighted the challenges of implementing cross-application awareness support and translating the concept to users. The study resulted in design recommendations aimed at improving the implementation of future systems.
Abstract:
We consider the following problem: users of an organization wish to outsource the storage of sensitive data to a large database server. It is assumed that the server storing the data is untrusted, so the data stored have to be encrypted. We further suppose that the manager of the organization has the right to access all data, but a member of the organization cannot access any data alone. The member must collaborate with other members to search for the desired data. In this paper, we investigate the notion of threshold privacy preserving keyword search (TPPKS) and define its security requirements. We construct a TPPKS scheme and prove its security under the assumptions of intractability of the discrete logarithm, decisional Diffie-Hellman and computational Diffie-Hellman problems.
Abstract:
This article examines some questions of statutory interpretation as they apply to section 130 of the Land Title Act 1994 (Qld).
Abstract:
Traction force microscopy (TFM) is commonly used to estimate cells' traction forces from the deformation that they cause on their substrate. The accuracy of TFM highly depends on the computational methods used to measure the deformation of the substrate and estimate the forces, and also on the specifics of the experimental set-up. Computer simulations can be used to evaluate the effect of both the computational methods and the experimental set-up without the need to perform numerous experiments. Here, we present one such TFM simulator that addresses several limitations of the existing ones. As a proof of principle, we recreate a TFM experimental set-up, and apply a classic 2D TFM algorithm to recover the forces. In summary, our simulator provides a valuable tool to study the performance of TFM methods, refine experimental set-ups, and guide the extraction of biological conclusions from TFM experiments.
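The recover-forces step of a classic 2D TFM algorithm can be illustrated with a heavily simplified scalar analogue: substrate displacement is modelled as the convolution of the traction field with a Green's-function-like kernel, and traction is recovered by Tikhonov-regularised deconvolution in Fourier space. Real FTTC-style TFM uses the vector-valued Boussinesq Green's function; the scalar kernel, grid, and regularisation constant here are illustrative assumptions.

```python
import numpy as np

def recover_traction(displacement, kernel, reg=1e-3):
    """Scalar analogue of regularised Fourier-space traction recovery:
    displacement = kernel * traction (circular convolution), so traction
    is estimated by Tikhonov-regularised deconvolution."""
    D = np.fft.fft2(displacement)
    K = np.fft.fft2(kernel)
    T = np.conj(K) * D / (np.abs(K) ** 2 + reg)
    return np.real(np.fft.ifft2(T))
```

On synthetic data, a point-like traction blurred into a smooth displacement field is recovered at the correct location, which is the kind of check a TFM simulator makes systematic.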
Abstract:
In vivo osteochondral defect models predominantly consist of small animals, such as rabbits. Although they have an advantage of low cost and manageability, their joints are smaller and more easily healed compared with larger animals or humans. We hypothesized that osteochondral cores from large animals can be implanted subcutaneously in rats to create an ectopic osteochondral defect model for routine and high-throughput screening of multiphasic scaffold designs and/or tissue-engineered constructs (TECs). Bovine osteochondral plugs with a 4 mm diameter osteochondral defect were fitted with novel multiphasic osteochondral grafts composed of chondrocyte-seeded alginate gels and osteoblast-seeded polycaprolactone scaffolds, prior to being implanted subcutaneously in rats with bone morphogenetic protein-7. After 12 weeks of in vivo implantation, histological and micro-computed tomography analyses demonstrated that TECs are susceptible to mineralization. Additionally, there was limited bone formation in the scaffold. These results suggest that the current model requires optimization to facilitate robust bone regeneration and vascular infiltration into the defect site. Taken together, this study provides a proof-of-concept for a high-throughput osteochondral defect model. With further optimization, the presented hybrid in vivo model may address the growing need for a cost-effective way to screen osteochondral repair strategies before moving to large animal preclinical trials.