14 results for Desktop


Relevance:

10.00%

Publisher:

Abstract:

The efficient development of multi-threaded software has, for many years, been an unsolved problem in computer science. Finding a solution to this problem has become urgent with the advent of multi-core processors. Furthermore, the problem has become more complicated because multi-cores are everywhere (desktop, laptop, embedded systems). As such, they execute generic programs, which exhibit very different characteristics from the scientific applications that have been the focus of parallel computing in the past.
Implicitly parallel programming is an approach to parallel programming that promises high productivity and efficiency and rules out synchronization errors and race conditions by design. There are two main ingredients to implicitly parallel programming: (i) a conventional sequential programming language that is extended with annotations that describe the semantics of the program and (ii) an automatic parallelizing compiler that uses the annotations to increase the degree of parallelization.
It is extremely important that the annotations and the automatic parallelizing compiler are designed with the target application domain in mind. In this paper, we discuss the Paralax approach to implicitly parallel programming and we review how the annotations and the compiler design help to successfully parallelize generic programs. We evaluate Paralax on SPECint benchmarks, which are a model for such programs, and demonstrate scalable speedups, up to a factor of 6 on 8 cores.
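As a rough illustration of the annotation idea (the abstract does not give Paralax's concrete syntax, so the PURE marker below is a hypothetical stand-in), the programmer writes ordinary sequential code and states semantic facts that an automatic parallelizing compiler can exploit:

```cpp
// Hypothetical annotation macro (not the actual Paralax syntax); it stands in
// for a semantic fact the programmer states: this function has no side effects.
#define PURE

#include <cstddef>
#include <vector>

PURE static int score(int item) { return item * item + 1; }  // illustrative work

// A conventional sequential loop. Because score() is declared PURE, an
// automatic parallelizing compiler is free to execute iterations concurrently
// without risking data races or needing explicit synchronization.
std::vector<int> process(const std::vector<int>& items)
{
    std::vector<int> out(items.size());
    for (std::size_t i = 0; i < items.size(); ++i)
        out[i] = score(items[i]);
    return out;
}
```

Because the annotation only adds information rather than threads or locks, the program's sequential semantics are preserved, which is what rules out races and synchronization errors by design.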

Relevance:

10.00%

Publisher:

Abstract:

The document draws largely on the results of research carried out by Hugh McNally and Dominic Morris of McNally Morris Architects and Keith McAllister of Queen’s University Belfast between 2012 and 2013. The objective of the study was to obtain a greater understanding of the impact that architecture and the built environment can have on people with autism spectrum disorder (ASD). The investigation into the subject centred on parents of young children with ASD in the belief that they are most likely to have an intimate knowledge of the issues that affect their children and are relatively well positioned to communicate those issues.

The study comprised a number of components.

- Focus group discussions with parents of children with ASD
- A postal questionnaire completed by parents of children with ASD
- A comprehensive desktop study of contemporary research into the relationship between ASD and aspects of the built environment.

Social stories are then used to help illustrate the world of a child with ASD to the reader and identify a series of potential difficulties for the pupil with ASD in a primary school setting. Design considerations and mitigating measures are then proposed for each difficulty.

The intention is that the document will raise awareness of some of the issues affecting primary school children with ASD and generate discourse among those whose task it is to provide an appropriate learning environment for all children. This includes teachers, health professionals, architects, parents, carers, school boards, government bodies and those with ASD themselves.

While this document uses the primary school as a lens through which to view some of the issues associated with ASD, it is the authors’ contention that the school can be seen as a “microcosm” for the wider world and that lessons taken from the learning environment can be applied elsewhere. The authors therefore hope that the document will help raise awareness of the myriad of issues for those with ASD that are embedded in the vast landscape of urban configurations and building types making up the spatial framework of our society.

Relevance:

10.00%

Publisher:

Abstract:

Modern business practices in engineering are increasingly turning to post-manufacture service provision in an attempt to generate additional revenue streams and ensure commercial sustainability. Maintainability has always been a consideration during the design process, but in the past it has generally been considered of tertiary importance, behind manufacturability and primary product function, in terms of design priorities. The need to draw whole-life considerations into concurrent engineering (CE) practice has encouraged companies to address issues such as maintenance earlier in the design process, giving equal importance to all aspects of the product lifecycle. The consideration of design for maintainability (DFM) early in the design process has the potential to significantly reduce maintenance costs and improve overall running efficiencies as well as safety levels. However, a lack of simulation tools still hinders the adaptation of CE to include practical elements of design, and therefore further research is required to develop methods by which 'hands-on' activities such as maintenance can be fully assessed and optimised as concepts develop. Virtual Reality (VR) has the potential to address this issue, but the application of these traditionally high-cost systems can require complex infrastructure, and their use has typically focused on aesthetic aspects of mature designs. This paper examines the application of cost-effective VR technology to the rapid assessment of aircraft interior inspection during conceptual design. It focuses on the integration of VR hardware with a typical desktop engineering system and examines the challenges of data transfer, graphics quality and the development of practical user functions within the VR environment. Conclusions drawn to date indicate that the system has the potential to improve maintenance planning through the provision of a usable environment for inspection which is available as soon as preliminary structural models are generated as part of the conceptual design process. Challenges still exist in the efficient transfer of data between the CAD and VR environments, as well as in the quantification of any benefits that result from the proposed approach. The results of this research will help to improve product maintainability, reduce product development cycle times and lower maintenance costs.

Relevance:

10.00%

Publisher:

Abstract:

The commonly used British Standard constant head triaxial permeability test for fine-grained soils is relatively time consuming. A reduction in the time required for soil permeability testing would provide potential cost savings to the construction industry, particularly in the construction quality assurance of landfill clay liners. The purpose of this paper is to evaluate an alternative approach to measuring the permeability of fine-grained soils that benefits from accelerated time scaling of seepage flow when specimens are tested under the elevated gravity conditions provided by a centrifuge. As part of the investigation, an apparatus was designed and produced to measure water flow through soil samples under elevated gravitational acceleration using a small desktop laboratory centrifuge. A membrane was used to hydrostatically confine the test sample. A miniature data acquisition system was designed and incorporated in the apparatus to monitor and record changes in head and flow throughout the tests. Under enhanced gravity in the centrifuge, the flow through the sample was under 'variable head' conditions, as opposed to the 'constant head' conditions of the classic constant head permeability test conducted at 1 g. A mathematical model was developed for analysis of Darcy's coefficient of permeability under conditions of elevated gravitational acceleration and verified using the results obtained. The test data compare well with the results obtained on analogous samples using the classical British Standard constant head permeability tests.
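For orientation, the classical falling-head relation at normal gravity is k = (aL / (A(t_2 - t_1))) ln(h_1/h_2), with a the standpipe area, A the specimen area and L the specimen length. Because the seepage in the centrifuge is driven at N times normal gravity, a first-order estimate of the 1 g coefficient of permeability recovered from such a test takes the form below; this is a sketch of the expected scaling, not the model derived in the paper:

k = \frac{a\,L}{N\,A\,(t_2 - t_1)}\,\ln\!\left(\frac{h_1}{h_2}\right)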

Relevance:

10.00%

Publisher:

Abstract:

Background: Modern cancer research often involves large datasets and the use of sophisticated statistical techniques. Together these add a heavy computational load to the analysis, which is often coupled with issues surrounding data accessibility. Connectivity mapping is an advanced bioinformatic and computational technique dedicated to therapeutics discovery and drug re-purposing around differential gene expression analysis. On a normal desktop PC, it is common for the connectivity mapping task with a single gene signature to take >2 h to complete using sscMap, a popular Java application that runs on standard CPUs (Central Processing Units). Here, we describe new software, cudaMap, which has been implemented using CUDA C/C++ to harness the computational power of NVIDIA GPUs (Graphics Processing Units) and greatly reduce processing times for connectivity mapping.

Results: cudaMap can identify candidate therapeutics from the same signature in just over thirty seconds when using an NVIDIA Tesla C2050 GPU. Results from the analysis of multiple gene signatures, which would previously have taken several days, can now be obtained in as little as 10 minutes, greatly facilitating high-throughput candidate therapeutics discovery. We are able to demonstrate dramatic speed differentials between GPU-assisted and CPU-only execution as the computational load increases for high-accuracy evaluation of statistical significance.

Conclusion: Emerging 'omics' technologies are constantly increasing the volume of data and information to be processed in all areas of biomedical research. Embracing the multicore functionality of GPUs represents a major avenue of local accelerated computing. cudaMap will make a strong contribution to the discovery of candidate therapeutics by enabling speedy execution of heavy-duty connectivity mapping tasks, which are increasingly required in modern cancer research. cudaMap is open source and can be freely downloaded from http://purl.oclc.org/NET/cudaMap.
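To make the computational load concrete, the sketch below shows a simplified, CPU-only version of the kind of signed-rank connection score computed in sscMap-style connectivity mapping. The struct and function names and the normalisation are illustrative assumptions; the exact scoring and its GPU port in cudaMap may differ.

```cpp
#include <unordered_map>
#include <vector>

// One gene in a query signature: gene id plus regulation direction (+1 up, -1 down).
struct SignatureGene { int gene_id; int direction; };

// Illustrative signed-rank connection score between a gene signature and a single
// reference expression profile. 'signed_ranks' maps gene id -> signed rank in the
// reference profile (positive when up-regulated, negative when down-regulated).
// The sum is normalised by the largest achievable value so the score lies in [-1, 1].
double connectionScore(const std::vector<SignatureGene>& signature,
                       const std::unordered_map<int, double>& signed_ranks,
                       double max_rank)
{
    double sum = 0.0;
    for (const auto& g : signature) {
        auto it = signed_ranks.find(g.gene_id);
        if (it != signed_ranks.end())
            sum += g.direction * it->second;   // agreement in direction raises the score
    }
    const double max_possible = max_rank * static_cast<double>(signature.size());
    return max_possible > 0.0 ? sum / max_possible : 0.0;
}
```

Assessing the statistical significance of such scores typically means rescoring many random signatures against thousands of reference profiles, which is the independent, data-parallel workload that maps naturally onto GPU threads.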

Relevance:

10.00%

Publisher:

Abstract:

An unusual application of hydrological understanding to a police search is described. The lacustrine search for a missing person produced reports of bottom-water currents in the lake and contradictory indications from cadaver dogs. A hydrological model of the area was developed using pre-existing information from side-scan sonar, a desktop hydrogeological study and deployment of water-penetrating radar (WPR). These provided a hydrological theory for the initial search involving subaqueous groundwater flow, focused on an area of bedrock surrounded by sediment on the lake floor. The work shows the value a hydrological explanation can have for a police search operation (and equally for search and rescue). With hindsight, the desktop study should have preceded the search, allowing a better understanding of water conditions. The ultimate cause of lacustrine flow in this location is still not proven, but the hydrological model explained the problems encountered in the initial search.

Relevance:

10.00%

Publisher:

Abstract:

The burial of objects (human remains, explosives, weapons) below or behind concrete, brick, plaster or tiling may be associated with serious crime, and such locations are difficult to search. These are quite common forensic search scenarios, but little has been published on them to date. Most documented discoveries are accidental or result from suspect/witness testimony. The difficulty of locating such hidden objects means a random or chance-based approach is not advisable. A preliminary strategy is presented here, based on previous studies and augmented by primary research where new technology or applications are required. This blend allows a rudimentary search workflow, from remote desktop study, to non-destructive investigation, through to recommendations as to how the above may inform excavation, demonstrated here with a case study from a homicide investigation. Published case studies on the search for human remains demonstrate the problems encountered when trying to find and recover sealed-in and sealed-over locations. Established methods include desktop study, photography, geophysics and search dogs: these are integrated with new technology (LiDAR and laser scanning; photographic rectification; close-quarter aerial imagery; ground-penetrating radar on walls and gamma-ray/neutron activation radiography) to propose this possible search strategy.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents the results of a measurement campaign aimed at characterizing and modeling the indoor radio channel between two hypothetical cellular handsets. The device-to-device channel measurements were made at 868 MHz and investigated a number of everyday scenarios, such as the devices being held at the users' heads, placed in a pocket, and one of the devices placed on a desktop. The recently proposed shadowed κ-μ fading model was used to characterize these channels and was shown to provide a good description of the measured data. It was also evident from the experiments that the device-to-device communications channel is susceptible to shadowing caused by the human body.
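For readers unfamiliar with the model, the κ-μ shadowed distribution describes a signal with μ clusters of multipath, a ratio κ of dominant to scattered power, and Nakagami-m shadowing of the dominant components. Its signal-to-noise-ratio density is usually quoted in the form below (reproduced for orientation; the abstract does not state the exact parameterisation used in the measurement campaign), where \bar{\gamma} is the mean SNR, \Gamma(\cdot) the gamma function and {}_1F_1 the confluent hypergeometric function:

f_{\gamma}(\gamma) = \frac{\mu^{\mu}\, m^{m}\,(1+\kappa)^{\mu}}{\Gamma(\mu)\,\bar{\gamma}\,(\mu\kappa + m)^{m}} \left(\frac{\gamma}{\bar{\gamma}}\right)^{\mu-1} \exp\!\left(-\frac{\mu(1+\kappa)\gamma}{\bar{\gamma}}\right) {}_{1}F_{1}\!\left(m;\,\mu;\, \frac{\mu^{2}\kappa(1+\kappa)}{\mu\kappa+m}\,\frac{\gamma}{\bar{\gamma}}\right)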

Relevance:

10.00%

Publisher:

Abstract:

The principal feature in the evolution of the internet has been its ever-growing reach, which now includes old and young, rich and poor. The internet's ever-encroaching presence has transported it from our desktop to our pocket and into our glasses. This is illustrated in the Internet Society Questionnaire on Multistakeholder Governance, which found that the main factors affecting change in the Internet governance landscape were more users online from more countries and the influence of the internet over daily life. The omnipresence of the internet is self-perpetuating; its usefulness grows with every new user and every new piece of data uploaded. The advent of social media and the creation of a virtual presence for each of us, even when we are not physically present or 'logged on', means we are fast approaching the point where we are all connected, to everyone else, all the time. We have moved far beyond the point where governments can claim to represent our views, which evolve constantly rather than being measured in electoral cycles.
This shift, which has seen citizens become creators of content rather than consumers of it, has undermined the centralist view of democracy and created an environment of wiki democracy or crowd-sourced democracy. This is at the heart of what is generally known as Web 2.0, and is widely considered to be a positive, democratising force. However, we argue, there are worrying elements here too. Government does not always deliver on the promise of the networked society as it involves citizens and others in the process of government. A number of key internet companies have also emerged as powerful intermediaries, harnessing the efforts of the many and re-using and re-selling the products and data of content providers in the Web 2.0 environment. A discourse about openness and transparency has been offered as a democratising rationale, but much of this masks an uneven relationship in which the value of online activity flows not to the creators of content but to those who own the channels of communication and the metadata that they produce.
In this context the state is just one stakeholder in the mix of influencers and opinion formers shaping our behaviours, and indeed our ideas of what is public. The questions of what it means to create or own something, and of how all these new relationships are to be ordered and governed, are subject to fundamental change. While government can often appear slow, unwieldy and even irrelevant in much of this context, there remains a need for some sort of political control to deal with the challenges that technology creates but cannot by itself control. In order for the internet to continue to evolve successfully, both technically and socially, it is critical that the multistakeholder nature of internet governance be understood and acknowledged, and perhaps, to an extent, re-balanced. Stakeholders can no longer be classified under the broad headings of government, private sector and civil society, nor can their roles be seen as some sort of benign and open co-production. Each user of the internet has a stake in its efficacy, and each, by their presence and participation, contributes to the experience, positive or negative, of other users as well as to the commercial success or otherwise of various online service providers. However, stakeholders have neither an equal role nor an equal share. The unequal relationship between the providers of content and those who simply package up and transmit that content, while harvesting the valuable data thus produced, needs to be addressed. Arguably this suggests a role for government that moves beyond simply celebrating and facilitating the ongoing technological revolution. This paper reviews the shifting landscape of stakeholders and their contribution to the efficacy of the internet. It critically evaluates the primacy of the individual as the key stakeholder and their supposed developing empowerment within the ever-growing sea of data. It also looks at the role of individuals in wider governance roles. Governments in a number of jurisdictions have sought to engage, consult or empower citizens through technology, but in general these attempts have had little appeal. Citizens have been too busy engaging, consulting and empowering each other to pay much attention to what their governments are up to. George Orwell's view of the future has not come to pass; in fact the internet has ensured the opposite. There is no Big Brother, but we are all looking over each other's shoulders all the time, while a number of big corporations capture and sell all this collective endeavour back to us.

Relevance:

10.00%

Publisher:

Abstract:

Burial grounds are commonly surveyed and searched by both police/humanitarian search teams and archaeologists. One aspect of an efficient search is to establish areas free of recent interments, allowing assets to be concentrated on suspect terrain. While 100% certainty in locating remains can never be achieved, the deployment of a red, amber, green (RAG) system for assessment has proven invaluable in our surveys. The RAG system is based on a desktop study (including burial ground records), visual inspection (mounding, collapses) and use of geophysics (in this case, ground-penetrating radar or GPR) in a multi-proxy assessment that provides search authorities with an appraisal of the state of inhumations and a level of legal backup for the decisions they make on whether or not to excavate (an 'exit strategy'). The system is flexible and will be built upon as research continues.

Relevance:

10.00%

Publisher:

Abstract:

The Copney Stone Circle Complex, Co. Tyrone, N. Ireland, is an important Bronze Age site forming part of the Mid-Ulster Stone Circle Complex. The Environment Service: Historic Monuments and Buildings (ESHMB) initiated a program of bog-clearance in August 1994 to excavate the stone circles. This work was completed by October 1994 and the excavated site was surveyed in August 1995. Almost immediately, the rate at which the stones forming the circles were breaking down was noted, and a program of study was initiated to make recommendations on the conservation of this important site. Digital photogrammetric techniques were applied to aerial images of the stone circles and digital terrain models were created from the images at a range of scales. These provide base data sets for comparison with identical surveys to be completed in successive years and will allow the rate of deterioration of the circles, and the areas most affected, to be determined. In addition, a 2D analysis provides an accurate record of the absolute 2D dimensions of the stones for rapid desktop computer analysis by researchers remote from the digital photogrammetric workstation used in the survey.

The products of this work are readily incorporated into web sites, educational packages and databases. The technique provides a rapid and user-friendly method of presenting a large body of information and measurements, and a reliable means of storing the information from Copney should it become necessary to re-cover the site.

Relevance:

10.00%

Publisher:

Abstract:

Power, and consequently energy, has recently attained first-class system resource status, on par with conventional metrics such as CPU time. To reduce energy consumption, many hardware- and OS-level solutions have been investigated. However, application-level information - which can provide the system with valuable insights unattainable otherwise - has been considered in only a handful of cases. We introduce OpenMPE, an extension to OpenMP designed for power management. OpenMP is the de facto standard for programming parallel shared-memory systems, but does not yet provide any support for power control. Our extension exposes (i) per-region multi-objective optimization hints and (ii) application-level adaptation parameters, in order to create energy-saving opportunities for the whole system stack. We have implemented OpenMPE support in a compiler and runtime system, and empirically evaluated its performance on two architectures, mobile and desktop. Our results demonstrate the effectiveness of OpenMPE, with geometric-mean energy savings of 15% across 9 use cases while maintaining full quality of service.
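A minimal sketch of what a per-region hint could look like in source code is shown below; the openmpe pragma spelling and its clauses are hypothetical illustrations of the abstract's "per-region multi-objective optimization hints", layered on a standard OpenMP loop, and are commented out so the fragment compiles with any OpenMP compiler.

```cpp
#include <cstddef>
#include <vector>

// Standard OpenMP parallel loop; the commented pragma is a hypothetical
// OpenMPE-style hint (not real OpenMP syntax) telling the runtime that this
// region may trade a bounded slowdown for lower energy, e.g. by lowering the
// DVFS state or the thread count chosen for the region.
void scale(std::vector<float>& v, float factor)
{
    // #pragma openmpe region objective(energy) max_slowdown(1.10)   // hypothetical hint
    #pragma omp parallel for
    for (std::size_t i = 0; i < v.size(); ++i)
        v[i] *= factor;
}
```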

Relevance:

10.00%

Publisher:

Abstract:

The continued use of traditional lecturing across Higher Education as the main teaching and learning approach in many disciplines must be challenged. An increasing number of studies suggest that this approach, compared to more active learning methods, is the least effective. In counterargument, the use of traditional lectures is often justified as necessary given a large student population. By analysing the implementation of a web-based broadcasting approach which replaced the traditional lecture within a programming-based module, and thereby removed the student-population rationale, it was hoped that the student learning experience would become more active and ultimately enhance learning on the module. The implemented model replaces the traditional approach of students attending an on-campus lecture theatre with a web-based live broadcast approach that focuses on students being active learners rather than passive recipients. Students 'attend' by viewing a live broadcast of the lecturer, presented as a talking head, and the lecturer's desktop, via a web browser. Video and audio communication is primarily from tutor to students, with text-based comments used to provide communication from students to tutor. This approach promotes active learning by allowing students to perform activities on their own computers rather than the passive viewing and listening commonly encountered in large lecture classes. Analysing this approach over two years (n = 234 students) indicates that 89.6% of students rated the approach as offering a highly positive learning experience. Comparing student performance across three academic years also indicates a positive change. A small data-analytics study of student participation levels suggests that the student cohort's willingness to engage with the broadcast lecture material is high.

Relevance:

10.00%

Publisher:

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system. We use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W of extra power in our testbed desktop system.
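A small sketch of the run-time mapping policy described above is given here; the threshold, the structure fields and the selection rule are invented for illustration and are not the paper's characterised values or exact algorithm.

```cpp
#include <string>
#include <vector>

// One candidate implementation of a detector, characterised offline.
struct Implementation {
    std::string name;     // e.g. "HOG on GPU"
    double watts;         // measured average power draw
    double ms_per_frame;  // measured processing time per frame
    double accuracy;      // measured detection accuracy (0..1)
};

// Pick an implementation for the next time window. When the scene shows high
// behavioural anomaly we favour the most accurate version that meets the frame
// budget (typically faster but more power-hungry); in routine or static periods
// we favour the most power-efficient one. Assumes 'candidates' is non-empty.
const Implementation& select(const std::vector<Implementation>& candidates,
                             double anomaly_score, double frame_budget_ms)
{
    const Implementation* best = nullptr;
    for (const auto& c : candidates) {
        if (c.ms_per_frame > frame_budget_ms) continue;        // too slow for real time
        if (best == nullptr ||
            (anomaly_score > 0.7 ? c.accuracy > best->accuracy  // anomalous scene: accuracy first
                                 : c.watts   < best->watts))    // quiet scene: power first
            best = &c;
    }
    return best ? *best : candidates.front();  // fall back if nothing meets the deadline
}
```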