1000 results for File size


Relevance:

60.00%

Publisher:

Abstract:

If the Internet could be used as a method of transmitting ultrasound images taken in the field quickly and effectively, it would bring tertiary consultation to even extremely remote centres. The aim of the study was to evaluate the maximum degree of compression of fetal ultrasound video recordings that would not compromise signal quality. A digital fetal ultrasound video recording of 90 s was produced, resulting in a file size of 512 MByte. The file was compressed to 2, 5 and 10 MByte. The recordings were viewed by a panel of four experienced observers who were blinded to the compression ratio used. Using a simple seven-point scoring system, the observers rated the quality of the clip on 17 items. The maximum compression ratio considered clinically acceptable was found to be 1:50-1:100. This produced final file sizes of 5-10 MByte, corresponding to a screen size of 320 x 240 pixels, running at 15 frames/s. This study expands the possibilities for providing tertiary perinatal services to the wider community.
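As a rough illustration of the arithmetic behind those figures (a minimal sketch; only the 512 MByte source size and the 1:50-1:100 ratios come from the abstract, everything else is assumed):

```python
# Minimal sketch: output file size implied by a given compression ratio.
# The 512 MByte source and the 1:50-1:100 acceptable range are taken from
# the abstract above; the helper itself is purely illustrative.

SOURCE_MBYTE = 512  # 90 s digital fetal ultrasound recording

def compressed_size(source_mbyte: float, ratio: int) -> float:
    """Return the file size (MByte) after compressing at 1:ratio."""
    return source_mbyte / ratio

for ratio in (50, 100):
    print(f"1:{ratio} -> {compressed_size(SOURCE_MBYTE, ratio):.1f} MByte")
# 1:50  -> 10.2 MByte
# 1:100 -> 5.1 MByte, matching the reported 5-10 MByte range
```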

Relevance:

60.00%

Publisher:

Abstract:

Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the data format is often the first obstacle: the lack of standardized ways of exploring different data layouts means the problem has to be solved from scratch each time. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns, so performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach - the data stay mostly in the CSV files; "zero configuration" - no need to specify a database schema; a small footprint - written in C++ with boost [1], SQLite [2] and Qt [3], it requires no installation; efficient plan execution through query rewriting, dynamic creation of indices for appropriate columns and static data retrieval directly from the CSV files; effortless support for millions of columns; easy handling of mixed text/number data thanks to per-value typing; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the required libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
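The system itself is not shown in the abstract, so the following is only a generic, minimal sketch of the SQL-over-CSV idea using the Python standard library; unlike the "no copy" approach described above, it copies the rows into an in-memory SQLite table, and the file name and query are hypothetical.

```python
# Generic sketch of querying a CSV file through SQL (standard library only).
# Unlike the "no copy" system described above, this copies the rows into an
# in-memory SQLite table; the file name and query are hypothetical.
import csv
import sqlite3

def query_csv(path: str, sql: str):
    with open(path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    con = sqlite3.connect(":memory:")
    columns = ", ".join(f'"{name}"' for name in header)
    placeholders = ", ".join("?" for _ in header)
    con.execute(f"CREATE TABLE data ({columns})")
    con.executemany(f"INSERT INTO data VALUES ({placeholders})", data)
    return con.execute(sql).fetchall()

# Hypothetical usage:
# print(query_csv("measurements.csv", "SELECT COUNT(*) FROM data"))
```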

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 23rd April 2014
Speaker(s): Willi Hasselbring
Organiser: Leslie Carr
Time: 23/04/2014 11:00-11:50
Location: B32/3077
File size: 669 MB

Abstract: For good scientific practice, it is important that research results may be properly checked by reviewers and possibly repeated and extended by other researchers. This is of particular interest for "digital science", i.e. for in-silico experiments. In this talk, I'll discuss some issues of how software systems and services may contribute to good scientific practice. Particularly, I'll present our PubFlow approach to automate publication workflows for scientific data. The PubFlow workflow management system is based on established technology. We integrate institutional repository systems (based on EPrints) and world data centers (in marine science). PubFlow collects provenance data automatically via our monitoring framework Kieker. Provenance information describes the origins and the history of scientific data in its life cycle, and the process by which it arrived. Thus, provenance information is highly relevant to the repeatability and trustworthiness of scientific results. In our evaluation in marine science, we collaborate with the GEOMAR Helmholtz Centre for Ocean Research Kiel.

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 23rd April 2014
Speaker(s): Willi Hasselbring
Organiser: Leslie Carr
Time: 23/04/2014 14:00-15:00
Location: B32/3077
File size: 802 MB

Abstract: The internal behavior of large-scale software systems cannot be determined on the basis of static (e.g., source code) analysis alone. Kieker provides complementary dynamic analysis capabilities, i.e., monitoring/profiling and analyzing a software system's runtime behavior. Application Performance Monitoring is concerned with continuously observing a software system's performance-specific runtime behavior, including analyses like assessing service level compliance or detecting and diagnosing performance problems. Architecture Discovery is concerned with extracting architectural information from an existing software system, including both structural and behavioral aspects like identifying architectural entities (e.g., components and classes) and their interactions (e.g., local or remote procedure calls). In addition to the Architecture Discovery of Java systems, Kieker supports Architecture Discovery for other platforms, including legacy systems implemented, for instance, in C#, C++, Visual Basic 6, COBOL or Perl. Thanks to Kieker's extensible architecture, it is easy to implement and use custom extensions and plugins. Kieker was designed for continuous monitoring in production systems, inducing only a very low overhead, which has been evaluated in extensive benchmark experiments. Please refer to http://kieker-monitoring.net/ for more information.
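Kieker itself is a Java framework and its API is not reproduced here; the following is only a minimal, hypothetical Python sketch of the general probe idea the abstract refers to, namely recording how long each monitored operation takes.

```python
# Hypothetical sketch of an operation-execution probe: record the duration of
# each call to an instrumented function. This illustrates the monitoring idea
# only and is NOT Kieker's API (Kieker is a Java framework).
import functools
import time

def monitoring_probe(func):
    """Wrap a function so that every call logs its execution time."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            print(f"{func.__qualname__}: {elapsed_ms:.3f} ms")
    return wrapper

@monitoring_probe
def handle_request():
    time.sleep(0.01)  # placeholder workload

handle_request()
```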

Relevance:

60.00%

Publisher:

Abstract:

Tuesday 22nd April 2014
Speaker(s): Sue Sentance
Organiser: Leslie Carr
Time: 22/04/2014 15:00-16:00
Location: B32/3077
File size: 698 MB

Abstract: Until recently, "computing" education in English schools mainly focused on developing general Digital Literacy and Microsoft Office skills. As of this September, a new curriculum comes into effect that places a strong emphasis on computation and programming. This change has generated some controversy in the news media (4-year-olds being forced to learn coding! the boss of the government’s coding education initiative cannot code, shock horror!) and also some concern in the teaching profession (how can we possibly teach programming when none of the teachers know how to program?). Dr Sue Sentance will explain the work of Computing At School, a part of the BCS Academy, in galvanising universities to help teachers learn programming and other computing skills. Come along and find out about the new English Computing Revolution:
- How will your children and your schools be affected?
- How will our University intake change? How will our degrees have to change?
- What is happening to the national perception of Computer Science?

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 9th April 2014
Speaker(s): Guus Schreiber
Time: 09/04/2014 11:00-11:50
Location: B32/3077
File size: 546 MB

Abstract: In this talk I will discuss linked data for museums, archives and libraries. This area is known for its knowledge-rich and heterogeneous data landscape. The objects in this field range from old manuscripts to recent TV programs. Challenges in this field include common metadata schemas, inter-linking of the omnipresent vocabularies, cross-collection search strategies, user-generated annotations and object-centric versus event-centric views of data. This work can be seen as part of the rapidly evolving field of digital humanities.

Speaker Biography: Guus Schreiber is a professor of Intelligent Information Systems at the Department of Computer Science at VU University Amsterdam. Guus’ research interests are mainly in knowledge and ontology engineering, with a special interest in applications in the field of cultural heritage. He was one of the key developers of the CommonKADS methodology. Guus acts as chair of W3C groups for Semantic Web standards such as RDF, OWL, SKOS and RDFa. His research group is involved in a wide range of national and international research projects. He is now project coordinator of the EU Integrated Project NoTube, concerned with the integration of Web and TV data with the help of semantics, and was previously Scientific Director of the EU Network of Excellence “Knowledge Web”.
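As a minimal, hedged illustration of the kind of linked-data description the talk concerns (a museum object expressed as RDF triples), here is a short Python sketch using rdflib; every URI and value in it is a made-up example, not taken from any collection mentioned above.

```python
# Minimal sketch: describing a (fictional) museum object as RDF triples with
# rdflib. All URIs and literal values here are hypothetical examples.
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC

g = Graph()
obj = URIRef("http://example.org/collection/object/42")
g.add((obj, DC.title, Literal("Illuminated manuscript")))
g.add((obj, DC.date, Literal("c. 1450")))
g.add((obj, DC.subject, URIRef("http://example.org/vocab/manuscripts")))

# Serialise the graph as Turtle so it can be linked with other collections.
print(g.serialize(format="turtle"))
```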

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 26th March 2014
Speaker(s): Dr Trung Dong Huynh
Organiser: Dr Tim Chown
Time: 26/03/2014 11:00-11:50
Location: B32/3077
File size: 349 MB

Abstract: Understanding the dynamics of a crowdsourcing application and controlling the quality of the data it generates is challenging, partly due to the lack of tools to do so. Provenance is a domain-independent means to represent what happened in an application, which can help verify data and infer their quality. It can also reveal the processes that led to a data item and the interactions of contributors with it. Provenance patterns can manifest real-world phenomena such as a significant interest in a piece of content, providing an indication of its quality, or even issues such as undesirable interactions within a group of contributors. In this talk, I will present an application-independent methodology for analysing provenance graphs, constructed from provenance records, to learn about such patterns and to use them for assessing some key properties of crowdsourced data, such as their quality, in an automated manner. I will also talk about CollabMap (www.collabmap.org), an online crowdsourcing mapping application, and show how we applied the approach above to the trust classification of data generated by the crowd, achieving an accuracy over 95%.
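As a minimal, hedged sketch of the general idea of mining patterns from provenance records (the records below are invented examples; this illustrates the idea only and is not the methodology presented in the talk):

```python
# Minimal sketch: provenance records as graph edges, plus one trivial pattern
# query (how many distinct entities each contributor was associated with).
# The records are invented examples; this is not the talk's actual method.
from collections import defaultdict

# (agent, relation, entity) provenance records
records = [
    ("alice", "wasAttributedTo", "building_outline_1"),
    ("bob",   "wasAttributedTo", "building_outline_1"),
    ("alice", "wasAttributedTo", "evacuation_route_3"),
    ("carol", "wasAttributedTo", "building_outline_2"),
]

# A simple pattern that could feed a quality/trust classifier:
# the number of distinct entities each agent contributed to.
entities_per_agent = defaultdict(set)
for agent, _relation, entity in records:
    entities_per_agent[agent].add(entity)

for agent, entities in sorted(entities_per_agent.items()):
    print(f"{agent}: {len(entities)} entities")
```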

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 19th March 2014
Speaker(s): Kirk Martinez, Dr Jonathon S Hare and Dr Enrico Costanza
Organiser: Dr Tim Chown
Time: 19/03/2014 11:00-11:50
Location: B32/3077
File size: 676 MB

Abstract: The new WAIS seminar series features classic seminars, research discussions, tutorial-style presentations, and research debates. This seminar takes the form of a research discussion which will focus on the Internet of Things (IoT) research being undertaken in WAIS and other research groups in ECS. IoT is a significant emerging research area, with funding for research available from many channels including new H2020 programmes and the TSB. We have seen examples of IoT devices being built in WAIS and other ECS groups, e.g. in sensor networking, energy monitoring via Zigbee devices, and of course Erica the Rhino (a Big Thing!). The goal of the session is to briefly present such examples of existing Things in our lab with the intent of seeding discussion on open research questions, and therefore future work we could do towards new Things being deployed for experimentation in Building 32 or its environs. The session will discuss what 'things' we have, how they work, what new 'things' we might want to create and deploy, what components we might need to enable this, and how we might interact with these objects.

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 12th March 2014
Speaker(s): Dr Tim Chown
Time: 12/03/2014 11:00-11:50
Location: B32/3077
File size: 642 MB

Abstract: The WAIS seminar series is designed to be a blend of classic seminars, research discussions, debates and tutorials. The Domain Name System (DNS) is a critical part of the Internet infrastructure. In this talk we begin by explaining the basic model of operation of the DNS, including how domain names are delegated and how a DNS resolver performs a DNS lookup. We then take a tour of DNS-related topics, including caching, poisoning, governance, the increasing misuse of the DNS in DDoS attacks, and the expansion of the DNS namespace to new top level domains and internationalised domain names. We also present the latest work in the IETF on DNS privacy. The talk will be pitched such that no detailed technical knowledge is required. We hope that attendees will gain some familiarity with how the DNS works, some key issues surrounding DNS operation, and how the DNS might touch on various areas of research within WAIS.
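As a small, hedged illustration of what a resolver lookup ultimately returns to an application (this uses the Python standard library's stub-resolver interface, which asks the locally configured resolver rather than walking the delegation tree itself; the hostname is just an example):

```python
# Minimal sketch: ask the locally configured DNS resolver for the addresses
# behind a hostname. This relies on the system's stub resolver and cache; it
# does not perform iterative resolution itself. The hostname is an example.
import socket

def lookup(hostname: str) -> list[str]:
    """Return the distinct IP addresses reported for a hostname."""
    results = socket.getaddrinfo(hostname, None)
    return sorted({sockaddr[0] for *_ignored, sockaddr in results})

print(lookup("example.org"))
```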

Relevance:

60.00%

Publisher:

Abstract:

Wednesday 2nd April 2014
Speaker(s): Stefan Decker
Time: 02/04/2014 11:00-11:50
Location: B2/1083
File size: 897 MB

Abstract: Ontologies have been promoted and used for knowledge sharing. Several models for representing ontologies have been developed in the Knowledge Representation field, in particular associated with the Semantic Web. In my talk I will summarise developments so far, and will argue that the currently advocated approaches miss certain basic properties of current distributed information sharing infrastructures (read: the Web and the Internet). I will sketch an alternative model aiming to support knowledge sharing and re-use on a global basis.

Relevance:

60.00%

Publisher:

Abstract:

Aim: To assess the influence of cervical preparation on the fracture susceptibility of roots. Material and methods: During root canal instrumentation, the cervical portions were prepared with instruments of different tapers: I: no cervical preparation; II: #30/.08; III: #30/.10; IV: #70/.12. The specimens were sealed with the following filling materials (n = 8): A: unfilled; B: Endofill/gutta-percha; C: AH Plus/gutta-percha; D: Epiphany SE/Resilon. For the fracture resistance test, a universal testing machine was used at 1 mm per minute. Results: ANOVA demonstrated a difference (P < 0.05) between taper instruments, with the highest value for group I (205.3 +/- 77.5 N), followed by II (185.2 +/- 70.8 N), III (164.8 +/- 48.9 N), and IV (156.7 +/- 41.4 N). There was no difference (P > 0.05) between filling materials A (189.1 +/- 66.3 N), B (186.3 +/- 61.0 N), C (159.7 +/- 69.9 N), and D (176.9 +/- 55.2 N). Conclusions: Greater cervical wear using a #70/.12 file increased root fracture susceptibility, and the tested filling materials were not able to restore resistance.
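As a hedged, minimal sketch of the kind of comparison reported above (a one-way ANOVA across the four instrumentation groups) using SciPy; the numbers below are hypothetical placeholders, not the study's measurements:

```python
# Minimal sketch of a one-way ANOVA across four groups, as in the abstract.
# The values are hypothetical placeholders, NOT the study's data.
from scipy.stats import f_oneway

group_I   = [210.0, 195.5, 230.1, 185.7]   # no cervical preparation
group_II  = [190.2, 178.4, 201.3, 170.9]   # #30/.08
group_III = [168.0, 155.2, 172.8, 160.1]   # #30/.10
group_IV  = [150.6, 162.3, 148.9, 158.4]   # #70/.12

f_stat, p_value = f_oneway(group_I, group_II, group_III, group_IV)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a difference between the groups.
```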

Relevance:

60.00%

Publisher:

Abstract:

This data set presents a comprehensive characterisation of the sedimentary structures of important groundwater-hosting formations in Germany (Herten aquifer analog) and Brazil (Descalvado aquifer analog). Multiple 2D outcrop faces are described in terms of hydraulic, thermal and chemical properties and interpolated in 3D using stochastic techniques. For each aquifer analog, multiple 3D realisations of the facies heterogeneity are provided using different stochastic simulation settings. These are unique analog data sets that can be used by the wider community to implement approaches for characterising aquifer formations.