982 results for capability data


Relevance:

70.00%

Publisher:

Abstract:

Understanding users' capabilities, needs and expectations is key to the domain of Inclusive Design. Much of the work in the field could be informed and further strengthened by clear, valid and representative data covering the full range of people's capabilities. This article reviews existing data sets and identifies the challenges inherent in measuring capability in a manner that is informative for work in Inclusive Design. The need for a design-relevant capability data set is identified and consideration is given to a variety of capability construct operationalisation issues including questions associated with self-report and performance measures, sampling and the appropriate granularity of measures. The need for further experimental work is identified and a programme of research designed to culminate in the design of a valid and reliable capability survey is described.

Relevance:

70.00%

Publisher:

Abstract:

Successful inclusive product design requires knowledge about the capabilities, needs and aspirations of potential users, and should cater for the different scenarios in which people will use products, systems and services. These include the individual at home, the individual in the workplace, businesses, and the products used in these contexts. This knowledge needs to reflect the development of theory, tools and techniques as research moves on.

Relevance:

60.00%

Publisher:

Abstract:

The wavelet packet transform decomposes a signal into a set of bases for time–frequency analysis. This decomposition creates an opportunity for distributed data mining, in which features extracted from different wavelet packet bases serve as feature vectors for applications. This paper presents a novel approach to integrated machine fault diagnosis based on localised wavelet packet bases of vibration signals. The best basis is first determined according to its classification capability. Data mining is then applied to extract features, and local decisions are drawn using Bayesian inference. A final conclusion is reached using a weighted-average method for data fusion. A case study on rolling element bearing diagnosis shows that this approach can greatly improve the accuracy of diagnosis.
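A minimal sketch of the general pipeline described in this abstract, not the authors' implementation: wavelet packet decomposition of a vibration signal, per-basis energy features, a local classifier per packet node (a naive-Bayes classifier stands in here for the paper's Bayesian inference step), and weighted-average fusion of the local posteriors. The wavelet, level, and uniform node weights are illustrative assumptions.

```python
import numpy as np
import pywt                                  # PyWavelets
from sklearn.naive_bayes import GaussianNB

def packet_features(signal, wavelet="db4", level=3):
    """Energy of each terminal wavelet packet node as a feature vector."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="natural")
    return np.array([np.sum(np.square(n.data)) for n in nodes])

def fused_diagnosis(signals, labels, test_signal, weights=None):
    """One local classifier per packet node, then weighted-average fusion."""
    X = np.array([packet_features(s) for s in signals])   # (n_samples, n_nodes)
    x = packet_features(test_signal).reshape(1, -1)
    n_nodes = X.shape[1]
    weights = np.ones(n_nodes) / n_nodes if weights is None else weights
    posteriors = []
    for j in range(n_nodes):                               # local decision per basis
        clf = GaussianNB().fit(X[:, [j]], labels)
        posteriors.append(clf.predict_proba(x[:, [j]])[0])
    fused = np.average(np.vstack(posteriors), axis=0, weights=weights)
    return fused.argmax()                                  # index of most probable fault class
```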

Relevance:

60.00%

Publisher:

Abstract:

To develop more inclusive products and services, designers need a means of assessing the inclusivity of existing products and new concepts. Following previous research on the development of scales for inclusive design at the University of Cambridge Engineering Design Centre (EDC) [1], this paper presents the latest version of the exclusion audit method. For a specific product interaction, this estimates the proportion of the Great British population who would be excluded from using a product or service because of the demands the product places on key user capabilities. A critical part of the method involves rating the level of demand placed by a task on a range of key user capabilities, so the procedure for performing this assessment was operationalised and its reliability was tested with 31 participants. There was no evidence that participants rated the same demands consistently. The qualitative results from the experiment suggest that the consistency of participants' demand-level ratings could be significantly improved if the audit materials and their instructions better guided the participant through the judgement process.
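A minimal sketch of the exclusion-estimation idea, under assumed data structures; it is not the EDC exclusion audit implementation. Here `population` is a hypothetical table of surveyed individuals with one capability score per column (higher means more capable) plus a survey weight, and `demands` maps each capability to the level the task requires on the same scale.

```python
import pandas as pd

def estimated_exclusion(population: pd.DataFrame, demands: dict) -> float:
    """Weighted share of people for whom any task demand exceeds their capability."""
    excluded = pd.Series(False, index=population.index)
    for capability, demand_level in demands.items():
        excluded |= population[capability] < demand_level     # excluded if any demand too high
    weights = population["survey_weight"]
    return float(weights[excluded].sum() / weights.sum())

# Hypothetical usage, with capabilities rated on an assumed 0-10 scale:
# pop = pd.DataFrame({"vision": [...], "dexterity": [...], "survey_weight": [...]})
# estimated_exclusion(pop, {"vision": 4, "dexterity": 6})
```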

Relevance:

60.00%

Publisher:

Abstract:

Current older adult capability data-sets fail to account for the effects of everyday environmental conditions on capability. This article details a study that investigates the effects of everyday ambient illumination conditions (overcast, 6000 lx; in-house lighting, 150 lx and street lighting, 7.5 lx) and contrast (90%, 70%, 50% and 30%) on the near visual acuity (VA) of older adults (n= 38, 65-87 years). VA was measured at a 1-m viewing distance using logarithm of minimum angle of resolution (LogMAR) acuity charts. Results from the study showed that for all contrast levels tested, VA decreased by 0.2 log units between the overcast and street lighting conditions. On average, in overcast conditions, participants could detect detail around 1.6 times smaller on the LogMAR charts compared with street lighting. VA also significantly decreased when contrast was reduced from 70% to 50%, and from 50% to 30% in each of the ambient illumination conditions. Practitioner summary: This article presents an experimental study that investigates the impact of everyday ambient illumination levels and contrast on older adults' VA. Results show that both factors have a significant effect on their VA. Findings suggest that environmental conditions need to be accounted for in older adult capability data-sets/designs.
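A quick check of the "around 1.6 times smaller" figure quoted above: LogMAR is the base-10 logarithm of the minimum angle of resolution, so a 0.2 log-unit change corresponds to a factor of 10 raised to 0.2 in the size of detail that can be resolved.

```python
# LogMAR = log10(minimum angle of resolution), so a 0.2 log-unit shift
# corresponds to a 10**0.2 ratio in resolvable detail size.
ratio = 10 ** 0.2
print(round(ratio, 2))   # 1.58, i.e. roughly 1.6 times smaller detail
```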

Relevance:

60.00%

Publisher:

Abstract:

The global business environment is witnessing tough times, and this situation has significant implications for how organizations manage their processes and resources. The accounting information system (AIS) plays a critical role in this situation, ensuring appropriate processing of financial transactions and availability of relevant information for decision-making. We suggest the need for a dynamic AIS environment for today's turbulent business environment, made possible by a dynamic AIS, complementary business intelligence systems, and technical human capability. Data collected through a field survey suggest that a dynamic AIS environment contributes to an organization's accounting functions of processing transactions, providing information for decision-making, and ensuring an appropriate control environment. These accounting processes in turn contribute to the firm-level performance of the organization. From these outcomes, one can infer that a dynamic AIS environment contributes to organizational performance in today's challenging business environment.

Relevance:

40.00%

Publisher:

Abstract:

A PowerPoint presentation on increasing research data management capability within your university, presented from the university library perspective and focusing on collaborations with university partners to develop and implement university-wide data management services and infrastructure.

Relevance:

40.00%

Publisher:

Abstract:

Part 14: Interoperability and Integration

Relevance:

30.00%

Publisher:

Abstract:

The construction industry has adopted information technology in its processes, including computer-aided design and drafting, construction documentation and maintenance. The volume of data generated within the industry has become increasingly overwhelming. Data mining is a sophisticated data search capability that uses classification algorithms to discover patterns and correlations within large volumes of data. This paper presents the selection and application of data mining techniques to building maintenance data. The results of applying such techniques, and the potential benefits of using those results to identify useful patterns and correlations that support decision-making for improving building life-cycle management, are presented and discussed.
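An illustrative sketch of the kind of classification-based pattern discovery described above, under an assumed schema for building maintenance records; the file name and column names are hypothetical, not the paper's dataset. A shallow decision tree is used so the discovered rules remain readable.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

records = pd.read_csv("maintenance_records.csv")             # hypothetical file
features = pd.get_dummies(records[["building_age", "component", "location"]])
target = records["maintenance_type"]

tree = DecisionTreeClassifier(max_depth=4).fit(features, target)
print(export_text(tree, feature_names=list(features.columns)))  # human-readable rules
```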

Relevance:

30.00%

Publisher:

Abstract:

Real-Time Kinematic (RTK) positioning is a technique used to provide precise positioning services at centimetre accuracy in the context of Global Navigation Satellite Systems (GNSS). While a Network-based RTK (NRTK) system involves multiple continuously operating reference stations (CORS), the simplest form of an NRTK system is single-base RTK. In Australia there are several NRTK services operating in different states and over 1000 single-base RTK systems supporting precise positioning applications for surveying, mining, agriculture, and civil construction in regional areas. Additionally, future-generation GNSS constellations with multiple frequencies, including modernised GPS, Galileo, GLONASS, and Compass, have either been developed or will become fully operational in the next decade. A trend in the future development of RTK systems is to make use of the various isolated network and single-base RTK systems and multiple GNSS constellations for extended service coverage and improved performance. Several computational challenges have been identified for future NRTK services, including:

• multiple GNSS constellations and multiple frequencies;
• large-scale, wide-area NRTK services with a network of networks;
• complex computation algorithms and processes;
• a greater part of the positioning process shifting from the user end to the network centre, with the ability to cope with hundreds of simultaneous user requests (reverse RTK).

These four challenges lead to two major requirements for NRTK data processing: expandable computing power and scalable data sharing/transfer capability. This research explores new approaches to addressing these future NRTK challenges and requirements using Grid Computing, in particular for large data-processing burdens and complex computation algorithms. A Grid Computing based NRTK framework is proposed, consisting of three layers: 1) a client layer in the form of a Grid portal; 2) a service layer; and 3) an execution layer. The user's request is passed through these layers and scheduled to different Grid nodes in the network infrastructure. A proof-of-concept demonstration of the proposed framework is performed in a five-node Grid environment at QUT and on Grid Australia. The Networked Transport of RTCM via Internet Protocol (Ntrip) open source software is adopted to download real-time RTCM data from multiple reference stations over the Internet, followed by job scheduling and simplified RTK computing. The system performance has been analysed, and the results preliminarily demonstrate the concepts and functionality of the new Grid Computing based NRTK framework, although some aspects of system performance remain to be improved in future work.
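A minimal, hedged sketch of the data-download step in the spirit of the Ntrip client mentioned above; this is not the actual Ntrip software, and the caster host, mountpoint and credentials are placeholders. An NTRIP 1.0 caster answers an HTTP-style request with "ICY 200 OK" and then streams raw RTCM frames, which a real system would hand on to job scheduling and RTK computation.

```python
import base64
import socket

def stream_rtcm(host, port, mountpoint, user, password, chunk=1024):
    """Yield raw RTCM data chunks from an NTRIP 1.0 caster (illustrative only)."""
    auth = base64.b64encode(f"{user}:{password}".encode()).decode()
    request = (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        "User-Agent: NTRIP simple-client\r\n"
        f"Authorization: Basic {auth}\r\n\r\n"
    )
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request.encode())
        header = sock.recv(chunk)
        if not header.startswith(b"ICY 200 OK"):
            raise RuntimeError(f"Caster refused request: {header!r}")
        while True:
            data = sock.recv(chunk)          # raw RTCM frames
            if not data:
                break
            yield data                       # hand off to scheduling / RTK processing

# Hypothetical usage:
# for frame in stream_rtcm("caster.example.org", 2101, "MOUNTPT", "user", "pass"):
#     process(frame)
```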

Relevance:

30.00%

Publisher:

Abstract:

Road safety is a major concern worldwide, and it will improve as road conditions and their effects on crashes are continually investigated. This paper proposes to use the capability of data mining to include a wider set of road variables for all available crashes with skid resistance values across the Queensland state main road network, in order to understand the relationships among crash, traffic and road variables. It presents a data mining based methodology for road asset management data to identify the road properties that contribute unduly to crashes. The models demonstrate high levels of accuracy in predicting crashes on roads when various road properties are included. The findings of these models show the relationships among skid resistance, crashes, crash characteristics and other road characteristics such as seal type, seal age, road type, texture depth, lane count, pavement width, rutting, speed limit, traffic rates, intersections, traffic signage and road design.
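An illustrative sketch only, with hypothetical file and column names rather than the Queensland dataset: a classifier relates road properties such as skid resistance, seal age and texture depth to whether a segment recorded a crash, and its feature importances point to candidate risk factors.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

segments = pd.read_csv("road_segments.csv")               # hypothetical file
predictors = ["skid_resistance", "seal_age", "texture_depth",
              "lane_count", "pavement_width", "speed_limit"]
X, y = segments[predictors], segments["crash_occurred"]

model = RandomForestClassifier(n_estimators=200).fit(X, y)
importance = pd.Series(model.feature_importances_, index=predictors)
print(importance.sort_values(ascending=False))            # candidate risk factors
```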

Relevance:

30.00%

Publisher:

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed, and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial of service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to predominantly rely on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context sensitive features are required to detect current attacks.
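A hedged sketch of the "time-based statistics derived from TCP/IP packet headers" idea discussed in the review: for each packet, count how many distinct destination ports the same source IP has touched within the preceding time window, a feature whose sudden growth is the classic signature of a port scan. The packet tuple format is an assumption for illustration.

```python
from collections import defaultdict, deque

def distinct_port_counts(packets, window=2.0):
    """packets: iterable of (timestamp, src_ip, dst_ip, dst_port) header tuples."""
    history = defaultdict(deque)              # src_ip -> recent (timestamp, dst_port)
    features = []
    for ts, src, _dst, dport in packets:
        recent = history[src]
        recent.append((ts, dport))
        while recent and ts - recent[0][0] > window:
            recent.popleft()                  # drop packets outside the time window
        features.append(len({port for _, port in recent}))   # distinct ports per source
    return features
```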