638 results for Table manipulation (Computer science)
Abstract:
Barreto-Lynn-Scott (BLS) curves are a stand-out candidate for implementing high-security pairings. This paper shows that particular choices of the pairing-friendly search parameter give rise to four subfamilies of BLS curves, all of which offer highly efficient and implementation-friendly pairing instantiations. Curves from these particular subfamilies are defined over prime fields that support very efficient towering options for the full extension field. The coefficients for a specific curve and its correct twist are automatically determined without any computational effort. The choice of an extremely sparse search parameter is immediately reflected in a highly efficient optimal ate Miller loop and final exponentiation. As a resource for implementors, we give a list of examples of implementation-friendly BLS curves across several high-security levels.
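To illustrate how a sparse search parameter determines a whole curve, the following minimal sketch derives the standard BLS12 family polynomials from a parameter x; the sample value of x is the widely known BLS12-381 parameter, used here purely for illustration and not necessarily one of the curves listed in the paper.

```python
# Minimal sketch: deriving BLS12 curve parameters from a sparse search parameter x.
# The family polynomials below are the standard BLS12 parameterisation; the sample
# value of x (the well-known BLS12-381 parameter) is only an illustration.

def bls12_params(x: int):
    r = x**4 - x**2 + 1                  # prime-order subgroup size r(x)
    p = (x - 1)**2 * r // 3 + x          # base-field characteristic p(x)
    t = x + 1                            # trace of Frobenius t(x)
    return p, r, t

x = -0xD201000000010000                  # sparse parameter: very low Hamming weight
p, r, t = bls12_params(x)
print(p.bit_length(), r.bit_length())    # 381 255
assert (p + 1 - t) % r == 0              # the curve order p + 1 - t is divisible by r
```

Because x has so few non-zero bits, the exponentiations by x in the Miller loop and final exponentiation require correspondingly few point operations, which is the efficiency effect the abstract refers to.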
Abstract:
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly changes (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges, the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. In this approach, distances to features are measured with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimated as depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in the measurements of range and of self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).
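To make the probabilistic flavour of this geometric approach concrete, here is a minimal sketch (not taken from any of the cited works) of the one-dimensional core operation of such filters: fusing two noisy range-based estimates of a landmark position by inverse-variance weighting. The sensor values and variances are hypothetical.

```python
# Minimal illustration of the probabilistic idea behind geometric mapping:
# two noisy measurements of the same landmark are fused by weighting each with
# the inverse of its variance, giving the maximum-likelihood combined estimate.
# Values and variable names are hypothetical.

def fuse(mu1, var1, mu2, var2):
    """Inverse-variance (Kalman-style) fusion of two Gaussian estimates."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Landmark at roughly 10 m: a laser reading (low noise) and a sonar reading (high noise).
mu, var = fuse(10.2, 0.05**2, 9.6, 0.5**2)
print(round(mu, 3), round(var, 5))   # the estimate is dominated by the more certain sensor
```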
Abstract:
The Lingodroids are a pair of mobile robots that evolve a language for places and for relationships between places (based on distance and direction). Each robot in these studies has its own understanding of the layout of the world, based on its unique experiences and exploration of the environment. Despite having different internal representations of the world, the robots are able to develop a common lexicon for places, and then use simple sentences to explain and understand relationships between places, including places that they could not physically experience, such as areas behind closed doors. By learning the language, the robots are able to develop representations for places that are inaccessible to them and, later, when the doors are opened, use those representations to perform goal-directed behavior.
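The following is a hedged toy sketch of the kind of lexicon-forming "naming game" such robots can play; the data structures and convergence rule here are illustrative assumptions, not the Lingodroids' actual algorithm.

```python
# Toy "where-are-we" naming game between two agents with private maps.
# When the agents meet at a place, the speaker utters its word for that place;
# the hearer adopts the word if it has none. Repeated games converge on a
# shared lexicon. This is an illustrative simplification, not the paper's method.
import random

def play_games(places, rounds=200, seed=0):
    rng = random.Random(seed)
    lexicons = [{}, {}]                        # each robot's private place -> word map
    for _ in range(rounds):
        place = rng.choice(places)
        speaker, hearer = rng.sample([0, 1], 2)
        word = lexicons[speaker].setdefault(place, f"word{rng.randrange(10**6)}")
        lexicons[hearer].setdefault(place, word)   # hearer adopts the speaker's word
    return lexicons

a, b = play_games(["kitchen", "corridor", "dock"])
print(a == b)   # True once every place has been the subject of at least one game
```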
Abstract:
Discovering proper search intents is a vital process for returning desired results, and it has been a consistently active research topic in information retrieval in recent years. Existing methods mainly rely on context-based mining, query expansion, and user profiling techniques, which still suffer from the issue of ambiguity in search queries. In this paper, we introduce a novel ontology-based approach that uses a world knowledge base to construct personalized ontologies for identifying adequate concept levels that match user search intents. An iterative mining algorithm is designed to evaluate potential intents level by level until the best result is reached. The proposed approach is evaluated on the large-scale RCV1 data set, and experimental results indicate a distinct improvement in top precision compared with baseline models.
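The sketch below illustrates the general idea of level-by-level intent selection over a small concept taxonomy; the taxonomy, scoring function, and stopping rule are illustrative assumptions rather than the paper's algorithm.

```python
# Toy illustration of level-by-level intent mining over a small concept taxonomy:
# a concept's coverage is the set of label words of itself and all descendants,
# and the search descends to the most specific concept that still covers the
# query terms as well as its parent does. Taxonomy, scoring, and stopping rule
# are made up for illustration.

TAXONOMY = {
    "science": ["computer science", "biology"],
    "computer science": ["information retrieval", "databases"],
    "information retrieval": [],
    "databases": [],
    "biology": [],
}

def coverage(concept):
    """Label words of the concept and of every descendant concept."""
    words = set(concept.split())
    for child in TAXONOMY.get(concept, []):
        words |= coverage(child)
    return words

def score(concept, query_terms):
    return len(coverage(concept) & query_terms) / len(query_terms)

def best_intent(root, query):
    terms = set(query.lower().split())
    current = root
    while TAXONOMY.get(current):
        best = max(TAXONOMY[current], key=lambda c: score(c, terms))
        if score(best, terms) < score(current, terms):
            break                     # specialising further would lose query coverage
        current = best                # the child covers the query at least as well: descend
    return current

print(best_intent("science", "information retrieval conference"))  # information retrieval
```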
Abstract:
Two-party key exchange (2PKE) protocols have been rigorously analyzed under various models considering different adversarial actions. However, the analysis of group key exchange (GKE) protocols has not been as extensive as that of 2PKE protocols. In particular, an important security attribute called key compromise impersonation (KCI) resilience has been completely ignored for the case of GKE protocols. Informally, a protocol is said to provide KCI resilience if the compromise of the long-term secret key of a protocol participant A does not allow the adversary to impersonate an honest participant B to A. In this paper, we argue that KCI resilience for GKE protocols is at least as important as it is for 2PKE protocols. Our first contribution is revised definitions of security for GKE protocols that consider KCI attacks by both outsider and insider adversaries. We also give a new proof of security for an existing two-round GKE protocol under the revised security definitions, assuming random oracles. We then show how to achieve insider KCI resilience in a generic way using a known compiler from the literature. As one may expect, this additional security assurance comes at the cost of an extra round of communication. Finally, we show that a few existing protocols are not secure against outsider KCI attacks. The attacks on these protocols illustrate the necessity of considering KCI resilience for GKE protocols.
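To make the KCI notion concrete, here is a hedged toy sketch, unrelated to any specific protocol analyzed in the paper, of why a two-party scheme whose session key depends only on the static Diffie-Hellman value is vulnerable: an adversary holding A's long-term secret can impersonate B to A. The group parameters and secrets are illustrative only.

```python
# Toy key-compromise-impersonation (KCI) illustration for static-DH key agreement:
# the session key is K = g^(ab) mod p. An adversary who has stolen A's long-term
# secret a can compute K from B's public key alone, and so can impersonate B *to A*
# without ever learning b. Parameters are illustrative, not secure choices.

p, g = 2**127 - 1, 5           # small illustrative prime modulus and generator

a = 123456789                  # A's long-term secret (assumed compromised)
b = 987654321                  # B's long-term secret (never revealed)
A_pub, B_pub = pow(g, a, p), pow(g, b, p)

key_at_A      = pow(B_pub, a, p)   # what A computes for a session with "B"
key_real_B    = pow(A_pub, b, p)   # what an honest B would compute (needs b)
key_adversary = pow(B_pub, a, p)   # adversary: uses only the stolen a and B's public key
assert key_at_A == key_real_B == key_adversary   # impersonation of B towards A succeeds
```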
Abstract:
The report card for the introductory programming unit at our university has historically been unremarkable in terms of attendance rates, student success rates, and student retention in both the unit and the degree course. After a recent course restructure involving a fresh approach to introducing programming, we reported high retention in the unit, with consistently high attendance and a very low failure rate. Following those encouraging results, we collected student attendance data for several semesters and compared attendance rates to student results. We have found that engaging workshop material which relates directly to course-relevant assessment items, and therefore drives the learning, delivered in a collaborative learning environment, has improved attendance to an extraordinary extent, with student failure rates plummeting to the lowest in the unit's recorded history at our university.
Abstract:
At the previous conference in this series, Corney, Lister and Teague presented research results showing relationships between code writing, code tracing and code explaining, from as early as week 3 of semester. We concluded that the problems some students face in learning to program start very early in the semester. In this paper we report on our replication of that experiment at two institutions, one of which is the institution of the original study. In some cases, we did not find the same relationship between explaining code and writing code, but we believe this was because our teachers discussed the code in lectures between the two tests. Apart from that exception, our replication results at both institutions are consistent with our original study.
Abstract:
Recent research on novice programmers has suggested that they pass through neo-Piagetian stages: sensorimotor, preoperational, and concrete operational stages, before eventually reaching programming competence at the formal operational stage. This paper presents empirical results in support of this neo-Piagetian perspective. The major novel contributions of this paper are empirical results for exam questions aimed at testing novices for the concrete operational abilities to reason with quantities that are conserved, processes that are reversible, and properties that hold under transitive inference. While the questions we used had been proposed earlier by Lister, he did not present any data on how students performed on them. Our empirical results demonstrate that many students struggle to answer these problems, despite their apparent simplicity. We then compare student performance on these questions with their performance on six "explain in plain English" questions.
Abstract:
In topological mapping, perceptual aliasing can cause different places to appear indistinguishable to the robot. When odometry information is severely corrupted or unavailable, topological mapping is difficult because the robot faces the loop-closing problem: determining whether it has visited a particular place before. In this article we propose to use neighbourhood information to disambiguate otherwise indistinguishable places. Using neighbourhood information for place disambiguation is an approach that neither depends on a specific choice of sensors nor requires geometric information such as odometry. Local neighbourhood information is extracted from a sequence of observations of visited places. In experiments using either sonar or visual observations from an indoor environment, the benefits of using neighbourhood clues to disambiguate otherwise identical vertices are demonstrated. Over 90% of the maps we obtain are isomorphic to the ground truth, and the choice of the robot's sensors has little impact on the experimental results.
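A hedged sketch of the underlying idea follows: a newly observed place is matched against existing vertices by comparing not only its own appearance label but also the labels of its neighbours. The signature and matching rule are illustrative assumptions, not the article's algorithm.

```python
# Toy loop-closure test under perceptual aliasing: two vertices with the same
# appearance label are distinguished by comparing the appearance labels of their
# neighbours. Data structures and the matching rule are illustrative only.

def neighbourhood_signature(graph, labels, v):
    """Multiset of neighbour appearance labels around vertex v."""
    return sorted(labels[n] for n in graph[v])

def candidate_revisits(graph, labels, observed_label, observed_neighbour_labels):
    """Vertices whose own label and whose neighbourhood both match the observation."""
    return [
        v for v, lab in labels.items()
        if lab == observed_label
        and neighbourhood_signature(graph, labels, v) == sorted(observed_neighbour_labels)
    ]

# Two corridors look identical ("corridor"), but their neighbourhoods differ.
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = {0: "office", 1: "corridor", 2: "corridor", 3: "kitchen"}

# The robot observes a corridor whose neighbours look like an office and a corridor.
print(candidate_revisits(graph, labels, "corridor", ["office", "corridor"]))  # [1]
```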
Abstract:
Collaborative question answering (cQA) portals such as Yahoo! Answers allow users, as askers or answer authors, to communicate and exchange information through the asking and answering of questions in the network. In their current set-up, answers to a question are arranged in chronological order. For effective information retrieval, it is advantageous to have the users' answers ranked according to their quality. This paper proposes a novel approach for evaluating and ranking the users' answers and recommending the top-n quality answers to information seekers. The proposed approach is based on a user-reputation method which assigns a score to an answer reflecting its author's reputation level in the network. The proposed approach is evaluated on a dataset collected from a live cQA site, namely Yahoo! Answers. To compare against the results obtained by the non-content-based user-reputation method, experiments were also conducted with several content-based methods that assign a score to an answer reflecting the quality of its content. Various combinations of non-content and content-based scores were also used in comparing results. Empirical analysis shows that the proposed method is able to rank the users' answers and recommend the top-n answers with good accuracy. The results of the proposed method outperform the content-based methods, the various combinations, and the results obtained by the popular link analysis method HITS.
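The following is a hedged sketch of reputation-based answer ranking in this spirit: each answer is scored by its author's reputation, here a smoothed fraction of the author's past answers chosen as best answers, and the top-n answers are recommended. The reputation formula is an illustrative stand-in, not necessarily the measure used in the paper.

```python
# Toy user-reputation ranking for a cQA thread: each answer is scored by its
# author's reputation, computed here as the Laplace-smoothed fraction of the
# author's past answers that were selected as "best answer". Illustrative only.

def reputation(best_answers: int, total_answers: int) -> float:
    return (best_answers + 1) / (total_answers + 2)      # smoothed best-answer ratio

def rank_answers(answers, author_stats, n=3):
    """answers: list of (answer_id, author). Returns the top-n by author reputation."""
    scored = [(reputation(*author_stats[author]), aid) for aid, author in answers]
    return [aid for _, aid in sorted(scored, reverse=True)[:n]]

author_stats = {"alice": (40, 50), "bob": (2, 30), "carol": (10, 12)}   # (best, total)
answers = [("a1", "bob"), ("a2", "alice"), ("a3", "carol")]
print(rank_answers(answers, author_stats, n=2))           # ['a2', 'a3']
```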
Abstract:
This paper is directed towards providing an answer to the question, "Can you control the trajectory of a Lagrangian float?" Because such a float has minimal actuation (only buoyancy control), its horizontal trajectory is dictated by drifting with ocean currents. However, with appropriate vertical actuation and by exploiting spatio-temporal variations in water speed and direction, we show here that broad controllability objectives can be met, such as waypoint following, keeping a float inside a bay, or keeping it out of a designated region. This paper extends theory experimentally evaluated on horizontally actuated Autonomous Underwater Vehicles (AUVs) for trajectory control using ocean forecast models, and presents an initial investigation into the controllability of these minimally actuated drifting AUVs. Simulated results offshore along the coast and within highly dynamic tidal bays illustrate two techniques that promise an affirmative answer to the question posed above.
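A hedged sketch of the core idea: with only buoyancy control, the float chooses the depth whose forecast current best advances it toward a waypoint. The forecast values and the greedy selection rule below are illustrative assumptions, not the paper's planner.

```python
# Greedy depth selection for a buoyancy-only float: among the forecast current
# vectors available at different depths, choose the depth whose current has the
# largest component toward the waypoint. Forecast values are made up; a real
# planner would use an ocean forecast model and plan over time as well as depth.
import math

def best_depth(position, waypoint, currents_by_depth):
    """currents_by_depth: {depth_m: (u_east, v_north) in m/s}."""
    dx, dy = waypoint[0] - position[0], waypoint[1] - position[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm                      # unit vector toward the waypoint
    def progress(depth):
        cu, cv = currents_by_depth[depth]
        return cu * ux + cv * uy                       # speed made good toward the goal
    return max(currents_by_depth, key=progress)

forecast = {5: (0.30, -0.10), 20: (-0.05, 0.25), 50: (0.10, 0.40)}    # m/s at each depth
print(best_depth((0.0, 0.0), (1000.0, 4000.0), forecast))             # 50
```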
Abstract:
In the last few years we have observed a proliferation of approaches for clustering XML documents and schemas based on their structure and content. This large number of approaches is due to the variety of applications requiring XML data to be clustered; these applications need data grouped by similar content, tags, paths, structure, and semantics. In this paper, we first outline the application contexts in which clustering is useful, and then survey the approaches proposed so far, classifying them by the abstract representation of the data (instances or schemas), the similarity measure adopted, and the clustering algorithm used. This presentation leads to a taxonomy in which the current approaches can be classified and compared. We aim to introduce an integrated view that is useful when comparing XML data clustering approaches, when developing a new clustering algorithm, and when implementing an XML clustering component. Finally, the paper describes future trends and research issues that still need to be addressed.
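A hedged sketch of one simple structure-based ingredient from this family of approaches: representing each XML document by its set of root-to-leaf tag paths and grouping documents whose path sets are sufficiently similar under the Jaccard measure. The representation, threshold, and greedy grouping are illustrative choices, not a specific surveyed algorithm.

```python
# Toy structure-based clustering of XML documents: each document is reduced to
# its set of root-to-leaf tag paths, similarity is the Jaccard overlap of those
# path sets, and documents are greedily grouped when similarity exceeds a threshold.
import xml.etree.ElementTree as ET

def tag_paths(xml_text):
    def walk(node, prefix):
        path = prefix + "/" + node.tag
        children = list(node)
        if not children:
            yield path
        for child in children:
            yield from walk(child, path)
    return set(walk(ET.fromstring(xml_text), ""))

def jaccard(a, b):
    return len(a & b) / len(a | b)

def cluster(docs, threshold=0.5):
    clusters = []                                   # each cluster keeps a representative path set
    for doc in docs:
        paths = tag_paths(doc)
        for c in clusters:
            if jaccard(paths, c["paths"]) >= threshold:
                c["members"].append(doc)
                break
        else:
            clusters.append({"paths": paths, "members": [doc]})
    return [c["members"] for c in clusters]

docs = [
    "<book><title/><author/></book>",
    "<book><title/><author/><year/></book>",
    "<movie><title/><director/></movie>",
]
print(len(cluster(docs)))   # 2: the two book documents group together
```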
Abstract:
Autonomous guidance of agricultural vehicles is vital as mechanized farming production becomes more prevalent. It is crucial that tractor-trailers are guided with accuracy in both the lateral and longitudinal directions while being affected by large disturbance forces, or slips, owing to uncertain and undulating terrain. Successful research has concentrated on trajectory control, which can provide longitudinal and lateral accuracy if the vehicle moves without sliding and the trailer is passive. In this paper, the problem of robust trajectory tracking along straight and circular paths of a tractor with a steerable trailer is addressed. A robust, nonlinear controller combining backstepping and nonlinear PI control is proposed. For vehicles subject to sliding, the proposed controller makes the lateral deviations and orientation errors of the tractor and trailer converge to a neighborhood of the origin. Simulation results are presented to illustrate that the suggested controller ensures precise trajectory tracking in the presence of slip.
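The following is a heavily simplified, hedged sketch of the control problem, omitting the trailer and the backstepping design of the paper: a kinematic vehicle tracks a straight path under a constant lateral slip disturbance, and a PI steering law on the lateral error shows why the integral action is needed to reject constant slip. Model, gains, and disturbance values are illustrative assumptions.

```python
# Heavily simplified illustration of trajectory tracking under slip: a kinematic
# point-vehicle follows the line y = 0 while a constant lateral slip pushes it
# sideways; a PI law on the lateral error steers it back. This omits the trailer
# and the paper's backstepping design; it only shows why integral action is
# needed to reject a constant slip disturbance.

def simulate(kp=1.5, ki=0.8, slip=0.3, v=1.0, dt=0.05, steps=600):
    y, heading, integral = 1.0, 0.0, 0.0         # start 1 m off the path
    for _ in range(steps):
        integral += y * dt
        heading_ref = -(kp * y + ki * integral)  # PI law: desired heading relative to the path
        heading += 2.0 * (heading_ref - heading) * dt   # first-order steering response
        y += (v * heading + slip) * dt           # small-angle lateral kinematics plus slip
    return y

print(abs(simulate()) < 0.05)        # True: lateral error converges near zero despite slip
print(abs(simulate(ki=0.0)) > 0.1)   # True: without integral action a steady offset remains
```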
Abstract:
Nowadays, everyone can effortlessly access a range of information on the World Wide Web (WWW). As information resources on the web continue to grow tremendously, it becomes progressively more difficult to meet the high expectations of users and find relevant information. Although existing search engine technologies can find valuable information, they suffer from the problems of information overload and information mismatch. This paper presents a hybrid Web Information Retrieval approach that allows personalised search using an ontology, a user profile, and collaborative filtering. The approach uses the ontology to find the context of the user query with minimal user involvement. It also updates the user profile automatically over time as the user's behaviour changes. Finally, it incorporates recommendations from similar users through a collaborative filtering technique. The proposed method is evaluated with the FIRE 2010 dataset and a manually generated dataset. Empirical analysis reveals that the Precision, Recall and F-Score of most queries for many users are improved with the proposed method.
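A hedged sketch of the general idea of combining such signals: an ontology-derived context score, a user-profile score, and a collaborative-filtering score are merged into a single ranking score per document with fixed weights. The component values and weights below are illustrative assumptions, not the paper's formulation.

```python
# Toy hybrid ranking: each candidate document gets three component scores
# (ontology-based query context, user-profile affinity, collaborative filtering
# from similar users) that are combined with fixed weights. Values are made up.

WEIGHTS = {"ontology": 0.5, "profile": 0.3, "collaborative": 0.2}

def hybrid_score(scores: dict) -> float:
    return sum(WEIGHTS[k] * scores.get(k, 0.0) for k in WEIGHTS)

candidates = {
    "doc1": {"ontology": 0.9, "profile": 0.4, "collaborative": 0.2},
    "doc2": {"ontology": 0.5, "profile": 0.9, "collaborative": 0.8},
    "doc3": {"ontology": 0.2, "profile": 0.3, "collaborative": 0.9},
}

ranking = sorted(candidates, key=lambda d: hybrid_score(candidates[d]), reverse=True)
print(ranking)   # ['doc2', 'doc1', 'doc3']
```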
Abstract:
Bystander is a multi-user, immersive, interactive environment intended for public display in a museum or art gallery. It is designed to make heritage collections available in novel and culturally responsible ways. We use its development as a case study to examine the role played in that process by a range of tools and techniques from participatory design traditions. We describe how different tools were used within the design process, specifically: the ways in which potential audience members were both included and represented; the prototypes that were constructed as a way of envisioning how the final work might be experienced; and how these tools have been brought together in ongoing design and evaluation. We close the paper with some reflections on extending participatory commitments into still-emerging areas of technology design that prioritise the design of spaces for human experience and reflective interaction.