388 results for lab computers


Relevance:

10.00%

Publisher:

Abstract:

Typically, the walking ability of individuals with a transfemoral amputation (TFA) is represented by the speed of walking (SofW) obtained in experimental settings. Recent developments in portable kinetic systems allow the level of activity of TFA to be assessed during actual daily living, outside the confined space of a gait lab. Unfortunately, only minimal spatio-temporal characteristics can be extracted from the kinetic data, including cadence and the duration of gait cycles. Therefore, there is a need for a way to use some of these characteristics to assess the instantaneous speed of walking during daily living. The purpose of the study was to compare several methods of determining SofW from minimal spatial gait characteristics.
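As a concrete illustration of the kind of estimate involved, the sketch below derives walking speed from cadence or gait-cycle duration under an assumed stride length; the stride-length value and function names are hypothetical placeholders, not methods or parameters from the study.

```python
# Hypothetical sketch: estimating speed of walking (SofW) from minimal
# spatio-temporal characteristics. The stride length and function names
# are illustrative placeholders, not values from the study.

def sofw_from_cadence(cadence_steps_per_min, stride_length_m=1.4):
    """Estimate walking speed (m/s) as stride rate x stride length.

    Cadence is in steps/min; one stride spans two steps, so the
    stride rate is cadence / 2 per minute.
    """
    strides_per_s = (cadence_steps_per_min / 2.0) / 60.0
    return strides_per_s * stride_length_m

def sofw_from_gait_cycle(cycle_duration_s, stride_length_m=1.4):
    """Estimate walking speed (m/s) from the duration of one gait cycle."""
    return stride_length_m / cycle_duration_s

if __name__ == "__main__":
    print(sofw_from_cadence(100))     # ~1.17 m/s at 100 steps/min
    print(sofw_from_gait_cycle(1.2))  # ~1.17 m/s for a 1.2 s gait cycle
```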

Relevance:

10.00%

Publisher:

Abstract:

Timely reporting, effective analysis and rapid distribution of surveillance data can assist in detecting aberrations in disease occurrence and further facilitate a timely response. In China, a new nationwide web-based automated system for outbreak detection and rapid response was developed in 2008. The China Infectious Disease Automated-alert and Response System (CIDARS) was developed by the Chinese Center for Disease Control and Prevention based on surveillance data from the existing electronic National Notifiable Infectious Diseases Reporting Information System (NIDRIS), started in 2004. NIDRIS greatly improved the timeliness and completeness of data reporting through real-time reporting via the Internet. CIDARS further facilitates the data analysis, aberration detection, signal dissemination, signal response and information communication needed by public health departments across the country. In CIDARS, three aberration detection methods are used to detect unusual occurrences of 28 notifiable infectious diseases at the county level and to transmit that information either in real time or on a daily basis. The Internet, computers and mobile phones are used to accomplish rapid signal generation and dissemination, and timely reporting and review of signal response results. CIDARS has been used nationwide since 2008; all Centers for Disease Control and Prevention (CDC) in China at the county, prefecture, provincial and national levels are involved in the system. It assists with early outbreak detection at the local level and prompts the reporting of unusual disease occurrences or potential outbreaks to CDCs throughout the country.
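The abstract does not name the three aberration detection methods CIDARS uses; as a generic illustration of how such detection often works in syndromic surveillance (a moving-baseline threshold in the spirit of the widely used EARS C1 rule, which is an assumption here, not a statement about CIDARS), the sketch below flags days whose case count exceeds the recent baseline.

```python
import statistics

def detect_aberration(daily_counts, window=7, threshold_sd=3.0):
    """Flag days whose count exceeds the moving baseline by more than
    `threshold_sd` standard deviations.

    daily_counts: list of case counts, one per day.
    Returns the indices of flagged days.
    """
    signals = []
    for day in range(window, len(daily_counts)):
        baseline = daily_counts[day - window:day]
        mean = statistics.mean(baseline)
        sd = statistics.stdev(baseline) or 1.0  # guard against zero variance
        if daily_counts[day] > mean + threshold_sd * sd:
            signals.append(day)
    return signals

counts = [2, 3, 1, 2, 4, 2, 3, 2, 15, 3]
print(detect_aberration(counts))  # flags day 8 (count of 15)
```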

Relevance:

10.00%

Publisher:

Abstract:

Resource assignment and scheduling is a difficult task when job processing times are stochastic and resources are to be used for both known and unknown demand. To operate effectively within such an environment, several novel strategies are investigated. The first focuses on the creation of a robust schedule and utilises the concept of strategically placed idle time (i.e. buffering). The second approach introduces the idea of maintaining a number of free resources at each time, and culminates in another form of strategically placed buffering. The attraction of these approaches is that they are easy to grasp conceptually and mimic what practitioners already do. Our extensive numerical testing has shown that these techniques ensure more prompt job processing and reduce job cancellations and waiting time. They are effective in the considered setting and could easily be adapted to many real-life problems, for instance those in health care. More importantly, this article demonstrates that integrating the two approaches is a better strategy and provides an effective stochastic scheduling approach.
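As a minimal sketch of the first idea, strategically placed idle time: each job is scheduled at its predecessor's expected finish time plus a buffer that grows with the job's processing-time variability. The one-sigma buffer rule below is a hypothetical illustration, not the authors' exact policy.

```python
# Hypothetical sketch of strategically placed idle time (buffering).
# Each job starts at the previous job's expected finish time plus a
# buffer proportional to processing-time variability; the buffer
# absorbs overruns so later jobs are not cancelled or delayed.

def build_buffered_schedule(jobs, buffer_factor=1.0):
    """jobs: list of (name, mean_duration, std_duration) tuples.
    Returns a list of (name, planned_start)."""
    schedule, t = [], 0.0
    for name, mean, std in jobs:
        schedule.append((name, t))
        t += mean + buffer_factor * std  # idle time scaled to variability
    return schedule

jobs = [("A", 30, 5), ("B", 45, 15), ("C", 20, 2)]
for name, start in build_buffered_schedule(jobs):
    print(f"job {name} planned to start at t={start:.0f} min")
# A at t=0, B at t=35, C at t=95
```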

Relevance:

10.00%

Publisher:

Abstract:

Libraries have often been first adopters of new technological innovations, such as punch cards, computers, barcodes and e-book readers. It is thus not surprising that many libraries have embraced the advent of the internet as an opportunity to move away from being mere repositories of books towards becoming idea stores and local network hubs for entrepreneurial thinking and new creative practices. This presentation looks at the case of “The Edge”, an initiative of the State Library of Queensland in Brisbane, Australia, to establish a digital culture centre and learning environment deliberately designed for the co-creation and co-construction of knowledge. This initiative illustrates the potential role of libraries as testing grounds for new technologies and technological practices, which is particularly relevant in the context of the NBN rollout across Australia. It also provides an example of new engagement strategies for innovative co-working spaces, a vital element in a trend that sees professionals, creatives and designers leave their traditional places of work and embrace the city as their office.

Relevance:

10.00%

Publisher:

Abstract:

Preface

The 9th Australasian Conference on Information Security and Privacy (ACISP 2004) was held in Sydney, 13–15 July 2004. The conference was sponsored by the Centre for Advanced Computing – Algorithms and Cryptography (ACAC), Information and Networked Security Systems Research (INSS), Macquarie University and the Australian Computer Society. The aim of the conference was to bring together researchers and practitioners working in areas of information security and privacy from the university, industry and government sectors. The conference program covered a range of aspects including cryptography, cryptanalysis, and systems and network security.

The program committee accepted 41 papers from 195 submissions. The reviewing process took six weeks and each paper was carefully evaluated by at least three members of the program committee. We appreciate the hard work of the members of the program committee and the external referees, who gave many hours of their valuable time. Of the accepted papers, there were nine from Korea, six from Australia, five each from Japan and the USA, three each from China and Singapore, two each from Canada and Switzerland, and one each from Belgium, France, Germany, Taiwan, The Netherlands and the UK. All the authors, whether or not their papers were accepted, made valued contributions to the conference.

In addition to the contributed papers, Dr Arjen Lenstra gave an invited talk, entitled Likely and Unlikely Progress in Factoring. This year the program committee introduced the Best Student Paper Award; the winner was Yan-Cheng Chang from Harvard University for his paper Single Database Private Information Retrieval with Logarithmic Communication.

We would like to thank all the people involved in organizing this conference. In particular, we would like to thank the members of the organizing committee, Andrina Brennan, Vijayakrishnan Pasupathinathan, Hartono Kurnio and Cecily Lenton, and members from ACAC and INSS, for their time and efforts.

Relevance:

10.00%

Publisher:

Abstract:

The Special Issue of Interacting with Computers (2015) showcases the current state of the art in intuitive interaction research. Several papers showcase potential new methods for both applying and assessing intuitive interaction during early and later phases of the design process. Diefenbach and Ullrich present a new, alternative framework for intuitive interaction, comprised of the four components of gut feeling, verbalizability, effortlessness and magical experience. Fischer and colleagues' paper reports on an experiment in applying image schemas; in this case they aimed to find a more efficient way of discovering and applying them, in order to improve the design process as well as the assessment of new interfaces. Still and co-researchers had a similar aim: establishing what levels and types of knowledge can be most easily and accurately elicited from users in order to be applied to new interfaces. Hespanhol and Tomitsch describe strategies for intuitive interaction in public urban spaces. Macaranas and colleagues describe an experiment that tested three different full-body gestural interfaces to establish which types of mappings were most intuitive: one based on image schemas and two on features previously encountered in other types of interfaces.

Relevance:

10.00%

Publisher:

Abstract:

The mining industry presents a number of ideal applications for sensor-based machine control because of the unstructured environment that exists within each mine. The aim of the research presented here is to increase the productivity of existing large compliant mining machines by retrofitting them with enhanced sensing and control technology. The current research focuses on the automatic control of the swing motion cycle of a dragline and an automated roof bolting system. We have achieved:

* closed-loop swing control of a one-tenth scale model dragline;
* single degree of freedom closed-loop visual control of an electro-hydraulic manipulator in the lab, developed from standard components.
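As a generic illustration of closed-loop control of this kind, the sketch below runs a PID position loop on a simple simulated axis; the plant model, gains and time step are hypothetical placeholders, not the dragline controller itself.

```python
# Hypothetical sketch of a closed-loop (PID) position controller of the
# kind used for swing-motion control. The unit-inertia plant model and
# the gains are illustrative placeholders, not the dragline system.

def pid_step(error, state, kp=4.0, ki=0.1, kd=2.0, dt=0.01):
    """One PID update. state = (integral, previous_error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

# Simulate a unit-inertia swing axis driven toward a 1.0 rad setpoint.
pos, vel = 0.0, 0.0
dt, setpoint = 0.01, 1.0
state = (0.0, setpoint - pos)  # seed prev_error to avoid a derivative kick
for _ in range(2000):          # 20 s of simulated time
    torque, state = pid_step(setpoint - pos, state, dt=dt)
    vel += torque * dt         # unit inertia, no friction
    pos += vel * dt
print(f"final position: {pos:.3f} rad")  # converges near 1.0
```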

Relevance:

10.00%

Publisher:

Abstract:

Games and activities, often involving aspects of pretence and fantasy play, are an essential part of everyday preschool life for many young children. Young children's spontaneous play activities can be understood as social life in action. Increasingly, young children's games and activities involve engagement in pretence, using play props to represent computers, laptops and other pieces of technology equipment. In this way, pretend play becomes a context for engaging with matters from the real world. There are a number of studies investigating school-aged children's engagement in gaming and other online activities, but less is known about what young children are doing with online technologies. Drawing on Australian Research Council-funded research on children engaging with technologies at home and school, this chapter investigates how young children use technologies in everyday life by showing how they draw on props, both real and imaginary, to support their play activities. An ethnomethodological approach using conversation analysis is employed to explore how children's gestures, gaze and talk work to introduce ideas and activities. This chapter contributes to understandings of how children's play intersects with technologies and pretend play.

Relevance:

10.00%

Publisher:

Abstract:

Objective: This paper presents an automatic active learning-based system for the extraction of medical concepts from clinical free-text reports. Specifically, (1) the contribution of active learning in reducing the annotation effort and (2) the robustness of an incremental active learning framework across different selection criteria and datasets are determined.

Materials and methods: The comparative performance of an active learning framework and a fully supervised approach was investigated to study how active learning reduces the annotation effort while achieving the same effectiveness as a supervised approach. Conditional Random Fields were used as the supervised method, with least confidence and information density as the two selection criteria for the active learning framework. The effect of incremental learning vs. standard learning on the robustness of the models within the active learning framework with different selection criteria was also investigated. Two clinical datasets were used for evaluation: the i2b2/VA 2010 NLP challenge and the ShARe/CLEF 2013 eHealth Evaluation Lab.

Results: The annotation effort saved by active learning to achieve the same effectiveness as supervised learning is up to 77%, 57% and 46% of the total number of sequences, tokens and concepts, respectively. Compared to the random sampling baseline, the saving is at least doubled.

Discussion: Incremental active learning guarantees robustness across all selection criteria and datasets. The reduction in annotation effort is always above the random sampling and longest sequence baselines.

Conclusion: Incremental active learning is a promising approach for building effective and robust medical concept extraction models while significantly reducing the burden of manual annotation.
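As an illustration of one of the two selection criteria named above, the sketch below implements least-confidence sampling; the function names and batch-based loop are hypothetical scaffolding, not the study's CRF pipeline.

```python
# Hypothetical sketch of least-confidence sampling, one of the two
# selection criteria named in the abstract. The model interface and
# data are illustrative placeholders, not the study's CRF pipeline.

def least_confidence_batch(unlabelled, predict_proba, batch_size=10):
    """Select the sequences the model is least confident about.

    unlabelled: list of unlabelled sequences.
    predict_proba: maps a sequence to the probability of its most
        likely labelling (e.g. the Viterbi path probability of a CRF).
    """
    # Confidence = P(most likely labelling); pick the lowest first.
    scored = sorted(unlabelled, key=lambda seq: predict_proba(seq))
    return scored[:batch_size]

# Usage: in each active-learning round, annotate the selected batch,
# add it to the training set, and retrain the model (incrementally,
# in the incremental variant the abstract describes).
```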

Relevance:

10.00%

Publisher:

Abstract:

In the early stages of design and modeling, computers and computer applications are often considered an obstacle rather than a facilitator of the process. Most notably, brainstorming, process modeling with business experts, and development planning are often performed by a team in front of a whiteboard. While "whiteboarding" is recognized as an effective tool, low-tech solutions that allow remote participants to contribute are still not generally available. This is a striking observation, considering that the vast majority of teams in large organizations are distributed teams. This has also been one of the key triggers behind the project described in this article, in which a team of corporate researchers set out to identify state-of-the-art technologies that could facilitate the scenario mentioned above. This paper is an account of a research project in the area of enterprise collaboration, with a strong focus on aspects of human-computer interaction in mixed-mode environments, especially in areas of collaboration where computers still play a secondary role. It describes a currently running corporate research project. © 2012 Springer-Verlag.

Relevance:

10.00%

Publisher:

Abstract:

Creative and ad-hoc work often involves non-digital artifacts, such as whiteboards and post-it notes. This preferred method of brainstorming and idea development, while facilitating work among collocated participants, makes it particularly tricky to involve remote participants, let alone cases where live social involvement is required and the number and location of remote participants can be vast. Our work originally focused on large distributed teams in business entities, since the vast majority of teams in large organizations are distributed teams. Our team of corporate researchers set out to identify state-of-the-art technologies that could facilitate the scenarios mentioned above. This paper is an account of a research project in the area of enterprise collaboration, with a strong focus on aspects of human-computer interaction in mixed-mode environments, especially in areas of collaboration where computers still play a secondary role. It describes a currently running corporate research project. In this paper we signal the potential use of the technology in situations where community involvement is either required or desirable. The goal of the paper is to initiate a discussion on the use of technologies initially designed to support enterprise collaboration in situations requiring community engagement. In other words, it is a contribution of technically focused research exploring the uses of the technology in areas such as social engagement and community involvement. © 2012 IEEE.

Relevance:

10.00%

Publisher:

Abstract:

Lattice-based cryptographic primitives are believed to offer resilience against attacks by quantum computers. We demonstrate the practicality of post-quantum key exchange by constructing cipher suites for the Transport Layer Security (TLS) protocol that provide key exchange based on the ring learning with errors (R-LWE) problem, and we accompany these cipher suites with a rigorous proof of security. Our approach ties lattice-based key exchange together with traditional authentication using RSA or elliptic curve digital signatures: the post-quantum key exchange provides forward secrecy against future quantum attackers, while authentication can be provided using RSA keys issued by today's commercial certificate authorities, smoothing the path to adoption. Our cryptographically secure implementation, aimed at the 128-bit security level, reveals that the performance price of switching from non-quantum-safe key exchange is not too high. With our R-LWE cipher suites integrated into the OpenSSL library and using the Apache web server on a 2-core desktop computer, we could serve 506 RLWE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KiB payload. Compared to elliptic curve Diffie-Hellman, this means an 8 KiB larger handshake and a reduction in throughput of only 21%. This demonstrates that provably secure post-quantum key exchange can already be considered practical.
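For intuition about how LWE-style key agreement works, here is a deliberately tiny, insecure toy sketch in plain integers. Real R-LWE key exchange works over polynomial rings and adds a reconciliation step so both sides derive identical bits; all parameters below are illustrative placeholders, not the paper's construction.

```python
import random

# Toy, insecure sketch of LWE-style approximate key agreement using
# plain integers. Real R-LWE key exchange (as in the cipher suites
# above) works over polynomial rings and uses reconciliation so both
# sides derive identical key bits.

q = 2**15                                          # toy modulus
def small_err(): return random.randint(-2, 2)      # "small" noise
def small_sec(): return random.choice([-2, -1, 1, 2])  # nonzero small secret

a = random.randrange(q)                            # shared public value

# Each party publishes b = a*s + e for a small secret s and error e.
s_alice, e_alice = small_sec(), small_err()
s_bob,   e_bob   = small_sec(), small_err()
b_alice = (a * s_alice + e_alice) % q              # Alice -> Bob
b_bob   = (a * s_bob   + e_bob)   % q              # Bob -> Alice

# Both sides now hold values that differ only by a small error term:
#   v_alice - v_bob = s_alice*e_bob - s_bob*e_alice  (mod q)
v_alice = (s_alice * b_bob)   % q
v_bob   = (s_bob   * b_alice) % q

# Take the most significant bit as the shared key bit. Rounding can
# disagree near the boundaries; reconciliation fixes this for real.
print((2 * v_alice) // q, (2 * v_bob) // q)        # equal w.h.p.
```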

Relevance:

10.00%

Publisher:

Abstract:

In his 1987 book, The Media Lab: Inventing the Future at MIT, Stewart Brand provides an insight into the visions of the future of the media in the 1970s and 1980s. He notes that Nicholas Negroponte made a compelling case for the foundation of a media laboratory at MIT with diagrams detailing the convergence of three sectors of the media: the broadcast and motion picture industry; the print and publishing industry; and the computer industry. Stewart Brand commented: ‘If Negroponte was right and communications technologies really are converging, you would look for signs that technological homogenisation was dissolving old boundaries out of existence, and you would expect an explosion of new media where those boundaries used to be’.

Two decades later, technology developers, media analysts and lawyers have become excited about the latest phase of media convergence. In 2006, the faddish Time Magazine heralded the arrival of various Web 2.0 social networking services:

You can learn more about how Americans live just by looking at the backgrounds of YouTube videos—those rumpled bedrooms and toy‐strewn basement rec rooms—than you could from 1,000 hours of network television. And we didn’t just watch, we also worked. Like crazy. We made Facebook profiles and Second Life avatars and reviewed books at Amazon and recorded podcasts. We blogged about our candidates losing and wrote songs about getting dumped. We camcordered bombing runs and built open‐source software. America loves its solitary geniuses—its Einsteins, its Edisons, its Jobses—but those lonely dreamers may have to learn to play with others. Car companies are running open design contests. Reuters is carrying blog postings alongside its regular news feed. Microsoft is working overtime to fend off user‐created Linux. We’re looking at an explosion of productivity and innovation, and it’s just getting started, as millions of minds that would otherwise have drowned in obscurity get backhauled into the global intellectual economy.

The magazine announced that Time’s Person of the Year was ‘You’, the everyman and everywoman consumer, ‘for seizing the reins of the global media, for founding and framing the new digital democracy, for working for nothing and beating the pros at their own game’.

This review essay considers three recent books which have explored the legal dimensions of new media. In contrast to the unbridled exuberance of Time Magazine, this series of legal works displays an anxious trepidation about the legal ramifications associated with the rise of social networking services. In his tour de force, The Future of Reputation: Gossip, Rumor, and Privacy on the Internet, Daniel Solove considers the implications of social networking services, such as Facebook and YouTube, for the legal protection of reputation under privacy law and defamation law. Andrew Kenyon’s edited collection, TV Futures: Digital Television Policy in Australia, explores the intersection between media law and copyright law in the regulation of digital television and Internet videos. In The Future of the Internet and How to Stop It, Jonathan Zittrain explores the impact of ‘generative’ technologies and ‘tethered applications’, considering everything from the Apple Mac and the iPhone to the One Laptop per Child programme.

Relevance:

10.00%

Publisher:

Abstract:

The care processes of healthcare providers are typically human-centric, flexible, evolving, complex and multi-disciplinary. Consequently, acquiring insight into the dynamics of these care processes can be an arduous task. A novel event-log-based approach for extracting valuable medical and organizational information on past executions of care processes is presented in this study. Care processes are analyzed with the help of a preferred set of process mining techniques in order to discover recurring patterns, analyze and characterize process variants, and identify adverse medical events.
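As an illustration of this kind of event-log analysis, the sketch below uses the open-source pm4py library to discover a process model and list variants; the library choice, the file name and the exact API calls (which vary across pm4py versions) are assumptions, not the authors' toolset.

```python
import pm4py

# Minimal sketch of event-log-based process mining in the spirit of
# the study above, using the open-source pm4py library (not
# necessarily the authors' toolset). "care_process.xes" is a
# placeholder file name for an event log of care-process executions.

log = pm4py.read_xes("care_process.xes")

# Discover a process model capturing recurring patterns in the log.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)
pm4py.view_petri_net(net, initial_marking, final_marking)

# Characterise process variants: each variant is a distinct sequence
# of activities; rare variants can point to deviations or adverse events.
variants = pm4py.get_variants(log)
for variant, count in sorted(variants.items(), key=lambda kv: -kv[1])[:5]:
    print(count, variant)
```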

Relevance:

10.00%

Publisher:

Abstract:

An intrinsic challenge associated with evaluating proposed techniques for detecting Distributed Denial-of-Service (DDoS) attacks and distinguishing them from Flash Events (FEs) is the extreme scarcity of publicly available real-world traffic traces. Those available are either heavily anonymised or too old to accurately reflect current trends in DDoS attacks and FEs. This paper proposes a traffic generation and testbed framework for synthetically generating different types of realistic DDoS attacks, FEs and other benign traffic traces, and for monitoring their effects on the target. Using only modest hardware resources, the proposed framework, built around a customised software traffic generator, ‘Botloader’, is capable of generating a configurable mix of two-way traffic for emulating either large-scale DDoS attacks, FEs or benign traffic traces that are experimentally reproducible. Botloader uses IP aliasing, a well-known technique available on most computing platforms, to create thousands of interactive UDP/TCP endpoints on a single computer, each bound to a unique IP address, to emulate large numbers of simultaneous attackers or benign clients.
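As a minimal sketch of the IP-aliasing idea, the code below adds alias addresses to a Linux interface and binds each client socket to a distinct source address, so one machine presents many apparent clients. The interface name, address ranges and target are placeholders, and this is not Botloader itself; the alias commands need administrative privileges.

```python
import socket
import subprocess

# Minimal sketch of IP aliasing to emulate many distinct clients from
# one machine, in the spirit of the Botloader approach described above
# (this is not Botloader itself). The interface "eth0", the 10.0.0.0/24
# alias range and the TEST-NET target are placeholders; run as root.

IFACE, TARGET = "eth0", ("192.0.2.10", 80)

def add_alias(ip, prefix=24):
    """Attach an additional IP address to the interface (Linux)."""
    subprocess.run(["ip", "addr", "add", f"{ip}/{prefix}", "dev", IFACE],
                   check=True)

def client_from(ip):
    """Open a TCP connection whose source address is the given alias."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((ip, 0))       # 0 = any ephemeral source port
    sock.connect(TARGET)
    return sock

clients = []
for host in range(2, 12):    # ten emulated clients, one per alias
    ip = f"10.0.0.{host}"
    add_alias(ip)
    clients.append(client_from(ip))
```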