391 results for Interactive Techniques
Abstract:
Background: In-depth investigation of crash risks informs prevention and safety promotion programmes. Traditionally, such investigations are conducted using exposure-controlled or case-control methodologies. However, these studies need either observational data for control cases or exogenous exposure data, such as vehicle-kilometres travelled, entry flow, or the product of conflicting flows for a particular traffic location or site. These data are not readily available and often require extensive data collection on a system-wide basis. Aim: The objective of this research is to propose an alternative methodology for investigating the crash risks of a road user group in different circumstances using readily available traffic police crash data. Methods: This study employs a combination of a log-linear model and the quasi-induced exposure technique to estimate the crash risks of a road user group. While the log-linear model reveals the significant interactions, and thus the prevalence of crashes of a road user group under various sets of traffic, environmental and roadway factors, the quasi-induced exposure technique estimates the relative exposure of that road user group under the same sets of explanatory variables. The combination of the two techniques therefore provides relative measures of crash risk under various roadway, environmental and traffic conditions. The proposed methodology is illustrated using five years of Brisbane motorcycle crash data. Results: Interpretation of the results for different combinations of interacting factors shows that the poor conspicuity of motorcycles is a predominant cause of motorcycle crashes. The inability of other drivers to correctly judge the speed and distance of an oncoming motorcyclist is also evident in right-of-way violation motorcycle crashes at intersections. Discussion and Conclusions: The combination of a log-linear model and the induced exposure technique is a promising methodology that can be applied to better estimate the crash risks of other road users. This study also highlights the importance of considering interaction effects to better understand hazardous situations. A further study comparing the proposed methodology with the case-control method would be useful.
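To make the pairing of the two techniques concrete, here is a minimal sketch, not the study's code: a Poisson log-linear model over cross-classified crash counts combined with quasi-induced exposure from not-at-fault parties. The input file, column names and factor levels are hypothetical placeholders.

```python
# A minimal sketch, not the study's code: pairing a Poisson log-linear model
# with quasi-induced exposure. File name, columns and factors are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per crash-involved unit, with categorical factors and an
# at-fault flag coded by the investigating police officer.
crashes = pd.read_csv("crashes.csv")  # hypothetical extract of police data
keys = ["road_user", "lighting", "intersection"]

# Log-linear model: cell counts cross-classified by the factors. Significant
# interaction terms flag conditions where a road user group's crashes are
# over-represented.
counts = crashes.groupby(keys).size().reset_index(name="n")
loglin = smf.glm("n ~ road_user * lighting + road_user * intersection",
                 data=counts, family=sm.families.Poisson()).fit()
print(loglin.summary())

# Quasi-induced exposure: not-at-fault units in two-unit crashes are taken
# as a proxy for the exposed population under the same conditions.
at_fault = crashes[crashes["at_fault"] == 1].groupby(keys).size()
not_at_fault = crashes[crashes["at_fault"] == 0].groupby(keys).size()

# Relative risk: share of at-fault involvements over share of induced exposure.
relative_risk = (at_fault / at_fault.sum()) / (not_at_fault / not_at_fault.sum())
print(relative_risk.sort_values(ascending=False).head())
```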
Abstract:
Acoustic emission (AE) analysis is one of several diagnostic techniques available for structural health monitoring (SHM) of engineering structures. Its advantages over other techniques include high sensitivity to crack growth and the capability of monitoring a structure in real time. Acoustic emission is the phenomenon of rapid release of energy within a material, in the form of stress waves, caused by crack initiation or growth. In the AE technique, these stress waves are recorded by suitable sensors placed on the surface of a structure, and the recorded signals are subsequently analysed to gather information about the nature of the source. By enabling early detection of crack growth, the AE technique helps in planning timely retrofitting, other maintenance work, or even replacement of the structure if required. Despite being a promising tool, several challenges still stand in the way of the successful application of the AE technique. Large amounts of data are generated during AE testing, so effective data analysis is necessary, especially for long-term monitoring applications. Appropriate analysis of AE data for quantifying damage levels is an area that has received considerable attention. This paper discusses the various approaches available for damage quantification and severity assessment, with special focus on civil infrastructure such as bridges. One such method, improved b-value analysis, is applied to data collected from laboratory testing.
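As a pointer to how the improved b-value is typically computed, here is a minimal sketch, not the paper's code, following the widely used Shiotani-style formulation over sliding windows of AE peak amplitudes; the data file, window length and the multipliers a1, a2 are hypothetical choices.

```python
# A minimal sketch, not the paper's code: improved b-value (Ib-value) over
# sliding windows of AE peak amplitudes (Shiotani-style formulation).
import numpy as np

def ib_value(amps_db, a1=1.0, a2=1.0):
    """Improved b-value for one window of AE peak amplitudes (in dB)."""
    mu, sigma = amps_db.mean(), amps_db.std()
    # Cumulative counts of hits whose amplitude exceeds the two thresholds.
    n_low = np.count_nonzero(amps_db >= mu - a1 * sigma)
    n_high = max(np.count_nonzero(amps_db >= mu + a2 * sigma), 1)  # avoid log(0)
    return (np.log10(n_low) - np.log10(n_high)) / ((a1 + a2) * sigma)

# Sliding-window trend: a drop in the Ib-value is commonly interpreted as
# macro-crack growth (large events become relatively more frequent).
amps = np.loadtxt("ae_peak_amplitudes_db.txt")  # hypothetical AE hit log
w = 100
trend = [ib_value(amps[i:i + w]) for i in range(0, len(amps) - w + 1, w)]
```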
Abstract:
In the medical and healthcare arena, patients' data is not just their own personal history but also a valuable large dataset for finding solutions to diseases. While electronic medical records are becoming popular and are used in healthcare workplaces such as hospitals, as well as by insurance companies and major stakeholders such as physicians and their patients, access to such information must be handled in a way that preserves privacy and security. Finding the best way to keep the data secure has therefore become an important issue in database security, and sensitive medical data should be encrypted in databases. There are many encryption/decryption techniques and algorithms for preserving privacy and security, and their performance is an important factor when medical data is managed in databases. Another important factor is that stakeholders need cost-effective ways to reduce the total cost of ownership. DAS (Data as a Service) is a popular outsourcing model that satisfies this cost-effectiveness, but it requires that the encryption/decryption modules be handled by trustworthy stakeholders. This research project focuses on query response times in a DAS model (AES-DAS) and compares the outsourcing model with an in-house model that uses Microsoft's built-in encryption scheme in SQL Server. The project includes building a prototype of medical database schemas, and two stages of simulations were carried out. The first stage used six databases to measure the relative performance of plain text, Microsoft built-in encryption, and AES-DAS. In particular, AES-DAS incorporates symmetric key encryption using AES (Advanced Encryption Standard) and a bucket indexing processor using a Bloom filter. The results are categorised into character-type queries, numeric-type queries, range queries, range queries using the bucket index, and aggregate queries. The second stage tests scalability from 5K to 2,560K records. The main result of these simulations is that, as an outsourcing model, AES-DAS using the bucket index is around 3.32 times faster than plain AES-DAS with 70 partitions and 10K-record databases. Retrieving numeric data takes less time than retrieving character data in AES-DAS. Aggregate query response times in AES-DAS are not as consistent as those under the MS built-in encryption scheme. The scalability test shows that once the DBMS reaches a certain threshold, query response times degrade rapidly. Further investigation is needed, however, to extend these outcomes and to construct a secure EMR (Electronic Medical Record) system more efficiently.
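To illustrate the general idea of bucket indexing with a Bloom filter over encrypted values, here is a minimal sketch of the assumed design, not the project's implementation: equality queries touch only one bucket, and the per-bucket Bloom filter prunes buckets that cannot match. The bucketing rule, sample values and use of the third-party `cryptography` package are illustrative assumptions.

```python
# A minimal sketch of the assumed design, not the project's implementation:
# equality queries over AES-encrypted values via a coarse bucket index,
# with a per-bucket Bloom filter to skip buckets that cannot match.
import hashlib
from cryptography.fernet import Fernet  # AES-128-CBC + HMAC under the hood

NUM_BUCKETS = 70  # mirrors the 70 partitions mentioned in the abstract
fernet = Fernet(Fernet.generate_key())

class Bloom:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0
    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size
    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p
    def might_contain(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def bucket_of(value: int) -> int:
    return value % NUM_BUCKETS  # naive equi-partitioning, for illustration

# Server-side store keeps only (bucket id, ciphertext) pairs.
store, blooms = [], {b: Bloom() for b in range(NUM_BUCKETS)}
for value in (5, 120, 7, 7000):
    b = bucket_of(value)
    store.append((b, fernet.encrypt(str(value).encode())))
    blooms[b].add(value)

def query_equal(target: int):
    b = bucket_of(target)
    if not blooms[b].might_contain(target):
        return []  # Bloom filter prunes the bucket with no decryption at all
    # Decrypt only the candidate bucket, then filter exactly (client side).
    return [v for bid, ct in store
            if bid == b and (v := int(fernet.decrypt(ct))) == target]

print(query_equal(7))  # -> [7]
```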
Abstract:
Local governments struggle to engage time-poor and seemingly apathetic citizens, as well as the city's young digital natives, the digital locals. Capturing the attention of this digitally literate community, who are technologically and socially savvy, adds a new quality to the challenge of community engagement for urban planning. This project developed and tested a lightweight design intervention aimed at removing the hierarchy between those who plan the city and those who use it. The aim is to narrow this gap by enhancing people's experience of physical spaces with digital, civic technologies that are directly accessible within those spaces. The study's research informed the development of a public screen system called Discussions In Space (DIS). It provides a feedback platform on specific topics, e.g., a concrete urban planning project, and encourages direct, in-situ, real-time user responses via SMS and Twitter. The thesis presents the findings of deploying and integrating DIS in a wide range of public and urban environments, including the iconic urban screen at Federation Square in Melbourne, to explore the associated Human-Computer Interaction (HCI) challenges and implications. DIS was also deployed in conjunction with a major urban planning project in Brisbane to explore the system's opportunities and challenges in better engaging Australia's new digital locals. Finally, the merits of the short-texted and ephemeral data generated by the system were evaluated in three focus groups with professional urban planners. DIS offers additional benefits for civic participation, as it gives voice to residents who otherwise would not easily be heard. It also promotes a positive attitude towards local governments and gathers complementary information different from that captured by more traditional public engagement tools.
Abstract:
We report and reflect upon the early stages of a research project that endeavours to establish a culture of critical design thinking in a tertiary game design course. We first discuss the current state of the Australian game industry and consider some perceived issues in game design courses and graduate outcomes. The second section presents our response to these issues: a project in progress which uses techniques originally exploited by Augusto Boal in his work Theatre of the Oppressed. We appropriate Boal's method to promote critical design thinking in a game design class. Finally, we reflect on the project and the ontology of design thinking from the perspective of Bruce Archer's call to reframe design as a 'third academic art'.
Abstract:
With the goal of improving the academic performance of primary and secondary students in Malaysia by 2020, the Malaysian Ministry of Education has made a significant investment in developing a Smart School Project. The aim of this project is to introduce interactive courseware into primary and secondary schools across Malaysia. As has been the case around the world, interactive courseware is regarded as a tool to motivate students to learn meaningfully and to enhance learning experiences. Through an initial pilot phase, the Malaysian government commissioned the development of interactive courseware from a number of developers and has rolled this courseware out to selected schools over the past 12 years. However, Ministry reports and several independent researchers have concluded that its uptake has been limited and that much of the courseware has not been used effectively in schools. This has been attributed to weaknesses in the interface design of the courseware, which, it has been argued, fails to accommodate the needs of students and teachers. Taking the Smart School Project's science courseware as a sample, this research project has investigated the extent, nature and causes of the problems that have arisen. In particular, it has focused on examining the quality and effectiveness of the interface design in facilitating interaction and supporting learning experiences. The analysis was conducted empirically, first by comparing the interface design principles, characteristics and components of the existing courseware against best practice, as described in the international literature, and against the government guidelines provided to the developers. An ethnographic study was then undertaken to observe how the courseware is used and received in the classroom, and to investigate stakeholders' (school principals', teachers' and students') perceptions of its usability and effectiveness. Finally, to understand how the issues may have arisen, the development process was reviewed and compared with the development methods recommended in the literature, as well as with the guidelines provided to the developers. The outcomes of the project include an empirical evaluation of the quality of the interface design of the Smart School Project's science courseware; the identification of other issues that have affected its uptake; an evaluation of the development process; and, out of this, an extended set of principles to guide the design and development of future Smart School Project courseware, to ensure that it accommodates the various stakeholders' needs.
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. It was recently shown that WLDA techniques can provide improvements over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant approaches prior to PLDA modeling. Based on the results presented in this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling provide an improved approach compared to uncompensated PLDA modeling for i-vector based speaker verification systems.
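For orientation, here is a minimal sketch, not the paper's system, of a weighted-LDA projection applied to development i-vectors before PLDA training. The inverse squared-distance weighting below is one common choice for up-weighting confusable speaker pairs; the paper's exact weighting functions for WLDA and WMFD may differ.

```python
# A minimal sketch, not the paper's system: weighted-LDA projection of
# development i-vectors before PLDA training.
import numpy as np

def wlda_projection(ivecs, spk_labels, out_dim):
    classes = np.unique(spk_labels)
    mu = {c: ivecs[spk_labels == c].mean(axis=0) for c in classes}
    d = ivecs.shape[1]
    Sw = np.zeros((d, d))  # within-speaker scatter
    for c in classes:
        X = ivecs[spk_labels == c] - mu[c]
        Sw += X.T @ X
    Sb = np.zeros((d, d))  # weighted between-speaker scatter
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            diff = (mu[ci] - mu[cj])[:, None]
            w = 1.0 / (diff.T @ diff).item()  # up-weight close (confusable) pairs
            Sb += w * (diff @ diff.T)
    # Solve the generalised eigenproblem Sb v = lambda Sw v; keep top directions.
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:out_dim]]

# Usage: project i-vectors, then train the (heavy-tailed) PLDA on projections.
# W = wlda_projection(dev_ivectors, dev_speaker_ids, out_dim=150)
# projected = dev_ivectors @ W
```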
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Continuing growth in the number and size of ships is likely to increase traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been used to model this risk qualitatively, semi-quantitatively and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgment. However, they are hampered by several shortcomings, such as the randomness and rarity of collisions, which yield too few collision counts for sound statistical analysis; insufficient explanation of collision causation; and a reactive approach to safety. A promising alternative that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts in place of collisions to model the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT to overcome those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable and efficient manner.
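As a purely illustrative sketch, not the article's model, a navigational conflict between two vessels can be flagged from the distance and time at the closest point of approach (DCPA/TCPA), two standard proximity measures in marine traffic conflict studies; positions, speeds and thresholds below are made up.

```python
# A minimal sketch: flagging a vessel-pair conflict from DCPA/TCPA.
import numpy as np

def cpa(p1, v1, p2, v2):
    """Return (DCPA, TCPA) for two vessels with positions p*, velocities v*."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    closing = dv @ dv
    tcpa = 0.0 if closing == 0 else max(0.0, -(dp @ dv) / closing)
    dcpa = np.linalg.norm(dp + dv * tcpa)
    return dcpa, tcpa

# A conflict event: the pair would pass dangerously close, soon.
dcpa, tcpa = cpa(p1=[0, 0], v1=[5, 0], p2=[800, -400], v2=[0, 4])  # m, m/s
is_conflict = dcpa < 500 and tcpa < 180  # hypothetical thresholds (m, s)
print(dcpa, tcpa, is_conflict)
```

Aggregated over observed vessel pairs, such conflict counts can stand in for the rare collision counts that hamper traditional statistical analysis.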
Abstract:
This chapter reviews common barriers to community engagement for Latino youth and suggests ways to move beyond those barriers by empowering young people to communicate their experiences, address the challenges they face, and develop recommendations for making their community more youth-friendly. As a case study, the chapter describes a program called Youth FACE IT (Youth Fostering Active Community Engagement for Integration and Transformation) in Boulder County, Colorado. The program enables Latino youth to engage in critical dialogue and participate in a community-based initiative. The chapter concludes by explaining specific strategies that planners can use to support active community engagement and to develop a future generation of planners and engaged community members that reflects emerging demographics.
Abstract:
In this paper we consider the variable-order time fractional diffusion equation. We adopt the Coimbra variable-order (VO) time fractional operator, which defines a consistent method for VO differentiation of physical variables; it can also be viewed as a Caputo-type definition. Although this is the most appropriate definition, having fundamental characteristics that are desirable for physical modeling, numerical methods for fractional partial differential equations using this definition have not yet appeared in the literature. Here, an approximate scheme is first proposed, and its stability, convergence and solvability are discussed via the technique of Fourier analysis. Numerical examples are provided to show that the numerical method is computationally efficient.
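For reference, here is a sketch of the Coimbra operator and the model equation in commonly used notation, written for 0 < α < 1; the diffusion coefficient κ and source term f are our generic labels, not necessarily the paper's symbols.

```latex
% Coimbra variable-order operator acting on u(t), for 0 < \alpha(t) < 1:
\mathcal{D}^{\alpha(t)} u(t) =
  \frac{1}{\Gamma\bigl(1-\alpha(t)\bigr)}
  \left[ \int_{0^+}^{t} (t-\sigma)^{-\alpha(t)}\, u'(\sigma)\, \mathrm{d}\sigma
       + \bigl(u(0^+) - u(0^-)\bigr)\, t^{-\alpha(t)} \right]

% Variable-order time fractional diffusion equation (generic form):
\frac{\partial^{\alpha(x,t)} u(x,t)}{\partial t^{\alpha(x,t)}}
  = \kappa\, \frac{\partial^{2} u(x,t)}{\partial x^{2}} + f(x,t)
```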
Abstract:
The research team recognized the value of network-level Falling Weight Deflectometer (FWD) testing for evaluating the structural condition trends of flexible pavements. However, practical limitations, such as the cost of testing, traffic control and safety concerns, and the difficulty of testing a large network, may discourage some agencies from conducting network-level FWD testing. For this reason, the surrogate measure of the Structural Condition Index (SCI) is suggested for use. The main purpose of the research presented in this paper is to investigate data mining strategies and to develop a method of predicting structural condition trends for network-level applications that does not require FWD testing. The research team first evaluated the existing and historical pavement condition, distress, ride, traffic and other data attributes in the Texas Department of Transportation (TxDOT) Pavement Maintenance Information System (PMIS), applied data mining strategies to the data, discovered useful patterns and knowledge for predicting SCI values, and finally provided a reasonable measure of pavement structural condition that is correlated with the SCI. To evaluate the performance of the developed prediction approach, a case study was conducted using SCI values calculated from FWD data collected on flexible pavements over a five-year period (2005–09) from 354 PMIS sections representing 37 pavement sections on the Texas highway system. The preliminary results showed that the proposed approach can serve as a supportive pavement structural index when FWD deflection data are not available, and can help pavement managers identify the timing and appropriate treatment level of preventive maintenance activities.
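As one way to picture the data mining step, here is a minimal sketch with hypothetical feature names, not TxDOT's actual PMIS schema: a regression model learns to predict SCI from routinely collected condition attributes, so structural condition can be estimated without FWD testing.

```python
# A minimal sketch: predicting SCI from routine PMIS-style attributes.
# Feature names and file are hypothetical. Requires scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

pmis = pd.read_csv("pmis_sections.csv")  # hypothetical historical extract
features = ["distress_score", "ride_score", "rutting", "alligator_cracking",
            "aadt", "truck_pct", "pavement_age"]  # assumed attribute names

X_train, X_test, y_train, y_test = train_test_split(
    pmis[features], pmis["sci"], test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("Held-out R^2 vs FWD-derived SCI:", model.score(X_test, y_test))
```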
Abstract:
In the context of increasing demand for potable water and the depletion of water resources, stormwater is a logical alternative. However, stormwater contains pollutants, among which metals are of particular interest due to their toxicity and persistence in the environment. Hence, it is imperative to remove toxic metals from stormwater to the levels prescribed by drinking water guidelines for potable use. Various techniques have been proposed for this, among which sorption using low-cost sorbents is economically viable and environmentally benign in comparison with other techniques. However, individual sorbents show affinity towards certain toxic metals, which results in poor removal of other toxic metals. It was hypothesised in this study that a mixture of sorbents with different metal affinity patterns can be used for the efficient removal of the range of toxic metals commonly found in stormwater. The performance of six sorbents in the sorption of Al, Cr, Cu, Pb, Ni, Zn and Cd, the toxic metals commonly found in urban stormwater, was investigated to select suitable sorbents for creating the mixtures. For this purpose, a multi-criteria analytical protocol was developed using the decision-making methods PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) and GAIA (Graphical Analysis for Interactive Assistance). Zeolite and seaweed were selected for the trial mixtures based on their metal affinity patterns and their performance against predetermined selection criteria. The metal sorption mechanisms employed by seaweed and zeolite were characterised using kinetics, isotherm and thermodynamics parameters determined from batch sorption experiments. Additionally, the kinetics rate-limiting steps were identified with an innovative approach based on GAIA and Spearman correlation techniques, developed as part of the study to overcome the limitations of conventional graphical methods in predicting the degree to which each kinetics step limits the overall metal removal rate. The sorption kinetics of zeolite was found to be limited primarily by intraparticle diffusion, followed by the sorption reaction steps, which were governed mainly by the hydrated ionic diameter of the metals. The isotherm study indicated that the metal sorption mechanism of zeolite was primarily physical in nature. The thermodynamics study confirmed that the energetically favourable nature of sorption increased in the order Zn < Cu < Cd < Ni < Pb < Cr < Al, which is in agreement with the metal sorption affinity of zeolite; hence, sorption thermodynamics influences the metal sorption affinity of zeolite. On the other hand, the primary kinetics rate-limiting step for seaweed was the sorption reaction process, followed by intraparticle diffusion. Boundary layer diffusion was also found to limit the metal sorption kinetics at low concentrations. According to the sorption isotherm study, Cd, Pb, Cr and Al were sorbed by seaweed via ion exchange, whilst sorption of Ni occurred via physisorption and ionic bonding was responsible for the sorption of Zn. The thermodynamics study confirmed that sorption by seaweed was energetically favourable in the order Zn < Cu < Cd < Cr ≈ Al < Pb < Ni. However, this did not agree with the affinity series derived for seaweed, suggesting a limited influence of sorption thermodynamics on the metal affinity of seaweed.
The investigation of zeolite-seaweed mixtures indicated that mixing sorbents has an effect on the kinetics rates and the sorption affinity. Additionally, theoretical relationships were derived to predict the boundary layer diffusion rate, intraparticle diffusion rate, sorption reaction rate and enthalpy of the mixtures from those of the individual sorbents. In general, the low coefficients of determination (R²) for the relationships between theoretical and experimental data indicated that the relationships were not statistically significant, which was attributed to the heterogeneity of the sorbents' properties. Nevertheless, in relative terms, the intraparticle diffusion rate, sorption reaction rate and enthalpy of sorption had higher R² values than the boundary layer diffusion rate, suggesting some relationship between the former set of parameters for the mixtures and those for the individual sorbents. The mixture containing 80% zeolite and 20% seaweed showed similar affinity for the sorption of Cu, Ni, Cd, Cr and Al, which was attributed to the approximately equal sorption enthalpies of these metal ions. It was therefore concluded that the seaweed-zeolite mixture can be used to obtain the same affinity for various metals present in a multi-metal system, provided the metal ions have similar enthalpies during sorption by the mixture.
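To make the kinetics analysis concrete, here is a minimal sketch, illustrative rather than the thesis code, fitting two standard models to batch sorption data q(t): the pseudo-second-order kinetics model and the Weber-Morris intraparticle diffusion model. The data points are made up.

```python
# A minimal sketch: fitting standard sorption kinetics models to q(t) data.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([5, 10, 20, 40, 60, 120], dtype=float)  # contact time, min
q = np.array([3.1, 4.6, 6.0, 7.2, 7.8, 8.4])         # uptake, mg/g (made up)

def pseudo_second_order(t, qe, k2):
    # q(t) = k2 qe^2 t / (1 + k2 qe t); qe = equilibrium uptake
    return k2 * qe**2 * t / (1 + k2 * qe * t)

def weber_morris(t, kid, c):
    # q(t) = kid sqrt(t) + c; a nonzero intercept c signals a boundary
    # layer contribution alongside intraparticle diffusion
    return kid * np.sqrt(t) + c

(qe, k2), _ = curve_fit(pseudo_second_order, t, q, p0=(q.max(), 0.01))
(kid, c), _ = curve_fit(weber_morris, t, q)
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg min); "
      f"kid = {kid:.2f} mg/(g min^0.5), C = {c:.2f} mg/g")
```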
Abstract:
Digital information that is place- and time-specific is increasingly becoming available on all aspects of the urban landscape. People (cf. the Social Web), places (cf. the Geo Web) and physical objects (cf. ubiquitous computing, the Internet of Things) are increasingly infused with sensors and actuators, and tagged with a wealth of digital information. Urban informatics research explores these emerging digital layers of the city at the intersection of people, place and technology. However, little is known about the challenges and new opportunities that these digital layers may offer to road users driving through today's mega-cities. We argue that this aspect is worth exploring, in particular with regard to Auto-UI's overarching goal of making cars both safer and more enjoyable. This paper presents the findings of a pilot study in which 14 urban informatics research experts participated in a guided ideation (idea creation) workshop within a simulated environment. They were immersed in different driving scenarios to imagine novel urban informatics applications specific to the driving context.