851 results for user study
Abstract:
Objective: The main aim of this study was to identify young drivers' underlying beliefs (i.e., behavioral, normative, and control) regarding initiating, monitoring/reading, and responding to social interactive technology (i.e., functions on a smartphone that allow the user to communicate with other people). Method: This qualitative study was a beliefs elicitation study in accordance with the Theory of Planned Behavior and sought to elicit young drivers' behavioral (i.e., advantages, disadvantages), normative (i.e., who approves, who disapproves), and control beliefs (i.e., barriers, facilitators) which underpin social interactive technology use while driving. Young drivers (N = 26) aged 17 to 25 years took part in an interview or focus group discussion. Results: While differences emerged between the three behaviors of initiating, monitoring/reading, and responding for each of the behavioral, normative, and control belief categories, the strongest distinction was within the behavioral beliefs category (e.g., communicating with the person that they were on the way to meet was an advantage of initiating; being able to determine whether to respond was an advantage of monitoring/reading; and communicating with important people was an advantage of responding). Normative beliefs were similar for initiating and responding behaviors (e.g., friends and peers were more likely to approve than other groups), and differences emerged for monitoring/reading (e.g., parents were more likely to approve of this behavior than of initiating and responding).
For control beliefs, there were differences between the beliefs regarding facilitators of these behaviors (e.g., familiar roads and conditions facilitated initiating; having audible notifications of an incoming communication facilitated monitoring/reading; and receiving a communication of immediate importance facilitated responding); however, the control beliefs that presented barriers were consistent across the three behaviors (e.g., difficult traffic/road conditions). Conclusion: The current study provides an important addition to the extant literature and supports emerging research which suggests initiating, monitoring/reading, and responding may indeed be distinct behaviors with different underlying motivations.
Abstract:
Purpose: Traditional construction planning relies upon the critical path method (CPM) and bar charts. Both of these methods suffer from visualization and timing issues that could be addressed by 4D technology specifically geared to meet the needs of the construction industry. This paper proposes a new construction planning approach based on simulation using a game engine. Design/methodology/approach: A 4D automatic simulation tool was developed and a case study was carried out. The proposed tool was used to simulate and optimize the plans for the installation of a temporary platform for piling in a civil construction project in Hong Kong. The tool simulated the result of the construction process with three variables: (1) equipment, (2) site layout and (3) schedule. Through this, the construction team was able to repeatedly simulate a range of options. Findings: The results indicate that the proposed approach can provide a user-friendly 4D simulation platform for the construction industry. The simulation can also identify the solution being sought by the construction team. The paper also identifies directions for further development of the 4D technology as an aid in construction planning and decision-making. Research limitations/implications: The tests on the tool are limited to a single case study, and further research is needed to test the use of game engines for construction planning in different construction projects to verify the approach's effectiveness. Future research could also explore the use of alternative game engines and compare their performance and results. Originality/value: The authors propose the use of a game engine to simulate the construction process based on resources, working space and construction schedule. The developed tool can be used by end-users without simulation experience.
Abstract:
Modularity has been suggested to be connected to evolvability because a higher degree of independence among parts allows them to evolve as separate units. Recently, the Escoufier RV coefficient has been proposed as a measure of the degree of integration between modules in multivariate morphometric datasets. However, it has been shown, using randomly simulated datasets, that the value of the RV coefficient depends on sample size. Also, so far there is no statistical test for the difference in the RV coefficient between a priori defined groups of observations. Here, we (1) use a rarefaction analysis to show that the value of the RV coefficient depends on sample size in real geometric morphometric datasets as well; (2) propose a permutation procedure to test for the difference in the RV coefficient between a priori defined groups of observations; (3) show, through simulations, that such a permutation procedure has an appropriate Type I error; (4) suggest that a rarefaction procedure could be used to obtain sample-size-corrected values of the RV coefficient; and (5) propose a nearest-neighbor procedure that could be used when studying the variation of modularity in geographic space. The approaches outlined here, readily extendable to non-morphometric datasets, allow study of the variation in the degree of integration between a priori defined modules. A Java application with a graphical user interface that performs the proposed test has also been developed and is available at the Morphometrics at Stony Brook Web page (http://life.bio.sunysb.edu/morph/).
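The two ingredients described above — the RV coefficient between two blocks of variables and a permutation test for its difference between a priori defined groups — can be sketched in a few lines of Python. This is a minimal illustration under our own assumptions (function names and the two-sided |RV_a − RV_b| statistic are ours), not the authors' Java implementation:

```python
import numpy as np

def rv_coefficient(X, Y):
    """Escoufier's RV coefficient between two blocks of variables.

    X: (n, p) array, Y: (n, q) array; columns are centred internally.
    RV = tr(Sxy Syx) / sqrt(tr(Sxx^2) tr(Syy^2)), always in [0, 1]."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Sxy = Xc.T @ Yc
    Sxx = Xc.T @ Xc
    Syy = Yc.T @ Yc
    return np.trace(Sxy @ Sxy.T) / np.sqrt(np.trace(Sxx @ Sxx) * np.trace(Syy @ Syy))

def rv_group_difference_test(X, Y, labels, n_perm=999, seed=0):
    """Permutation test for a difference in RV between two groups.

    Group labels are shuffled across observations; the observed
    |RV_a - RV_b| is compared with its permutation distribution."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    groups = np.unique(labels)
    assert len(groups) == 2

    def diff(lab):
        a, b = lab == groups[0], lab == groups[1]
        return abs(rv_coefficient(X[a], Y[a]) - rv_coefficient(X[b], Y[b]))

    observed = diff(labels)
    exceed = sum(diff(rng.permutation(labels)) >= observed for _ in range(n_perm))
    return observed, (exceed + 1) / (n_perm + 1)
```

Because the RV value itself is sample-size dependent, comparing groups of unequal size would in practice be combined with the rarefaction correction the abstract proposes.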
Abstract:
Two types of welfare states are compared in this article. Differences in procedural rights for the young unemployed at the level of service delivery are analyzed. In Australia, rights are regulated through a rigid procedural justice system. The young unemployed within the social assistance system in Sweden encounter staff with high discretionary powers, which makes the legal status of the unemployed weak but, on the other hand, makes the system more flexible. Despite the differences, there is striking convergence in how the young unemployed describe how discretionary power among street-level staff affects their procedural rights. This convergence can be attributed to similar professional norms, work customs and occupational cultures among street-level staff, and to a basic logic of conditionality in all developed welfare states whereby procedural rights are tightly coupled with responsibilities.
Abstract:
This thesis is a comparative case study in Japanese video game localization for the video games Sairen, Sairen 2 and Sairen Nyûtoransurêshon, and English-language localized versions of the same games as published in Scandinavia and Australia/New Zealand. All games are developed by Sony Computer Entertainment Inc. and published exclusively for Playstation2 and Playstation3 consoles. The fictional world of the Sairen games draws much influence from Japanese history, as well as from popular and contemporary culture, and in doing so caters mainly to a Japanese audience. For localization, i.e. the adaptation of a product to make it accessible to users outside the market it was originally intended for, this is a challenging issue. Video games are media of entertainment, and therefore localization practice must preserve the games' effects on the players' emotions. Further, video games are digital products that are comprised of a multitude of distinct elements, some of which are part of the game world, while others regulate the connection between the player as part of the real world and the game as digital medium. As a result, video game localization is also a practice that has to cope with the technical restrictions that are inherent to the medium. The main theory used throughout the thesis is Anthony Pym's framework for localization studies, which considers the user of the localized product as a defining part of the localization process. This concept presupposes that localization is an adaptation that is performed to make a product better suited for use during a specific reception situation. Pym also addresses the factor that certain products may resist distribution into certain reception situations because of their content, and that certain aspects of localization aim to reduce this resistance through significant alterations of the original product.
While Pym developed his ideas with mainly regular software in mind, they can also be adapted well to study video games from a localization angle. Since modern video games are highly complex entities that often switch between interactive and non-interactive modes, Pym's ideas are adapted throughout the thesis to suit the particular elements being studied. Instances analyzed in this thesis include menu screens, video clips, in-game action and websites. The main research questions focus on how the games' rules influence localization, and how the games' fictional domain influences localization. Because there are so many peculiarities inherent to the medium of the video game, other theories are introduced as well to complement the research at hand. These include Lawrence Venuti's discussions of foreignizing and domesticating translation methods for literary translation, and Jesper Juul's definition of games. Additionally, knowledge gathered from interviews with video game localization professionals in Japan during September and October 2009 is also utilized for this study. Apart from answering the aforementioned research questions, one of this thesis' aims is to enrich the still rather small field of game localization studies, and the study of Japanese video games in particular, one of Japan's most successful cultural exports.
Abstract:
This paper focuses on the fundamental right to be heard, that is, the right to have one’s voice heard and listened to – to impose reception (Bourdieu, 1977). It focuses on the ways that non-mainstream English is heard and received in Australia, where despite public policy initiatives around equal opportunity, language continues to socially disadvantage people (Burridge & Mulder, 1998). English is the language of the mainstream and most people are monolingually English (Ozolins, 1993). English has no official status yet it remains dominant and its centrality is rarely challenged (Smolicz, 1995). This paper takes the position that the lack of language engagement in mainstream Australia leads to linguistic desensitisation. Writing in the US context where English is also the unofficial norm, Lippi-Green (1997) maintains that discrimination based on speech features or accent is commonly accepted and widely perceived as appropriate. In Australia, non-standard forms of English are often disparaged or devalued because they do not conform to the ‘standard’ (Burridge & Mulder, 1998). This paper argues that talk cannot be taken for granted: ‘spoken voices’ are critical tools for representing the self and negotiating and manifesting legitimacy within social groups (Miller, 2003). In multicultural, multilingual countries like Australia, the impact of the spoken voice, its message and how it is heard are critical tools for people seeking settlement, inclusion and access to facilities and services. Too often these rights are denied because of the way a person sounds. This paper reports a study conducted with a group that has been particularly vulnerable to ongoing ‘panics’ about language – international students. International education is the third largest revenue source for Australia (AEI, 2010) but has been beset by concerns from academics (Auditor-General, 2002) and the media about student language levels and falling work standards (e.g. Livingstone, 2004). 
Much of the focus has been on high-stakes writing, but with the ascendancy of project work in university assessment and the increasing emphasis on oracy, there is a call to recognise the salience of talk, especially among students using English as a second language (ESL) (Kettle & May, 2012). The study investigated the experiences of six international students in a Master of Education course at a large metropolitan university. It utilised data from student interviews, classroom observations, course materials, university policy documents and media reports to examine the ways that speaking and being heard impacted on the students' learning and legitimacy in the course. The analysis drew on Fairclough's (2003) model of dialectical-relational Critical Discourse Analysis (CDA) to analyse the linguistic, discursive and social relations between the data texts and their conditions of production and interpretation, including the wider socio-political discourses on English, language difference, and second language use. The interests of the study were whether and how discourses of marginalisation and discrimination manifested, whether and how students recognised and responded to them pragmatically, and how these discourses juxtaposed with and/or contradicted the official rhetoric about diversity and inclusion. The underpinning rationale was that international students' experiences can provide insights into the hidden politics and practices of being heard and afforded speaking rights as a second language speaker in Australia.
Abstract:
Road traffic accidents are a large problem everywhere in the world. However, regional differences in traffic safety between countries are considerable. For example, traffic safety records are much worse in Southern Europe and the Middle East than in Northern and Western Europe. Despite the large regional differences in traffic safety, factors contributing to different accident risk figures in different countries and regions have remained largely unstudied. The general aim of this study was to investigate regional differences in traffic safety between Southern European/Middle Eastern (i.e., Greece, Iran, Turkey) and Northern/Western European (i.e., Finland, Great Britain, The Netherlands) countries and to identify factors related to these differences. We conducted seven sub-studies in which we applied a traffic culture framework, including a multi-level approach, to traffic safety. We used aggregated level data (national statistics), surveys among drivers, and data on traffic accidents and fatalities in the analyses. In the first study, we investigated the influence of macro level factors (i.e., economic, societal, and cultural) on traffic safety across countries. The results showed that a high GNP per capita and conservatism correlated with a low number of traffic fatalities, whereas a high degree of uncertainty avoidance, neuroticism, and egalitarianism correlated with a high number of traffic fatalities. In the second, third, and fourth studies, we examined whether the conceptualisation of road user characteristics (i.e., driver behaviour and performance) varied across traffic cultures, and how these factors determined overall safety and the differences between countries in traffic safety.
The results showed that the factorial agreement for driver behaviour (i.e., aggressive driving) and performance (i.e., safety skills) was unsatisfactory in Greece, Iran, and Turkey, where a lack of social tolerance and interpersonal aggressive violations seem to be important characteristics of driving. In addition, we found that driver behaviour (i.e., aggressive violations and errors) mediated the relationship between culture/country and accidents. Furthermore, drivers from "dangerous" Southern European countries and Iran scored higher on aggressive violations and errors than did drivers from "safe" Northern European countries. However, "speeding" appeared to be a "pan-cultural" problem in traffic. Similarly, aggressive driving seems largely to depend on road users' interactions and drivers' interpretation (i.e., cognitive biases) of the behaviour of others in every country involved in the study. Moreover, in all countries, a risky general driving style was mostly related to being young and male. The results of the fifth and sixth studies showed that among young Turkish drivers, gender stereotypes (i.e., masculinity and femininity) greatly influence driver behaviour and performance. Feminine drivers were safety-oriented, whereas masculine drivers were skill-oriented and risky. Since everyday driving tasks involve not only erroneous (i.e., risky or dangerous driving) or correct performance (i.e., normal habitual driving), but also "positive" driver behaviours, we developed a reliable scale for measuring "positive" driver behaviours among Turkish drivers in the seventh study. Consequently, we revised Reason's model [Reason, J. T., 1990. Human error. Cambridge University Press: New York] of aberrant driver behaviour to represent a general driving style, including all possible intentional behaviours in traffic, while evaluating the differences between countries in traffic safety.
The results emphasise the importance, for differences between countries in traffic safety, of economic, societal and cultural factors; of general driving style and skills, which are related to exposure; and of cognitive biases as well as age, sex, and gender.
Abstract:
BACKGROUND OR CONTEXT Thermodynamics is a core concept for mechanical engineers yet notoriously difficult. Evidence suggests students struggle to understand and apply the core fundamental concepts of thermodynamics, with analysis indicating a problem with student learning/engagement. A contributing factor is that thermodynamics is a 'science involving concepts based on experiments' (Mayhew 1990) with subject matter that cannot be completely defined a priori. To succeed, students must engage in a deep-holistic approach while taking ownership of their learning. The difficulty in achieving this often manifests itself in students 'not getting' the principles and declaring thermodynamics 'hard'. PURPOSE OR GOAL Traditionally, students practise and "learn" the application of thermodynamics in their tutorials; however, these do not consider prior conceptions (Holman & Pilling 2004). As 'hands on' learning is the desired outcome of tutorials, it is pertinent to study methods of improving their efficacy. Within the Australian context, the format of thermodynamics tutorials has remained relatively unchanged over the decades, relying anecdotally on a primarily didactic pedagogical approach. Such approaches are not conducive to deep learning (Ramsden 2003), with students often disengaged from the learning process. Evidence suggests (Haglund & Jeppsson 2012), however, that a deeper level and ownership of learning can be achieved using a more constructivist approach, for example through self-generated analogies. This pilot study aimed to collect data to support the hypothesis that the 'difficulty' of thermodynamics is associated with the pedagogical approach of tutorials rather than actual difficulty in subject content or deficiency in students. APPROACH Successful application of thermodynamic principles requires solid knowledge of the core concepts. Typically, tutorial sessions guide students in this application.
However, a lack of deep and comprehensive understanding can lead to student confusion in the applications, resulting in the learning of the 'process' of application without understanding 'why'. The aim of this study was to gain empirical data on student learning of both concepts and application within thermodynamic tutorials. The approach taken for data collection and analysis was: (1) Four concurrent tutorial streams were timetabled to examine student engagement/learning in traditional 'didactic' (3 weeks) and non-traditional (3 weeks) formats. In each week, two of the selected four sessions were traditional and two non-traditional. This provided a control group for each week. (2) The non-traditional tutorials involved activities designed to promote student-centred deep learning. Specific pedagogies employed were: self-generated analogies, constructivist learning, peer-to-peer learning, inquiry-based learning, ownership of learning and active learning. (3) After a three-week period, the teaching styles of the selected groups were switched, to allow each group to experience both approaches with the same tutor. This also acted to minimise any influence of tutor personality/style on the data. (4) At the conclusion of the trial, participants completed a '5 minute essay' on how they liked the sessions, a small questionnaire modelled on the modified (Christo & Hoang, 2013) SPQ designed by Biggs (1987), and a small formative quiz to gauge the level of learning achieved. DISCUSSION Preliminary results indicate that overall students respond positively to in-class demonstrations (inquiry-based learning) and active learning activities. Within the active learning exercises, the current data suggest students preferred individual rather than group or peer-to-peer activities.
Preliminary results from the open-ended questions, such as "What did you like most/least about this tutorial" and "Do you have other comments on how this tutorial could better facilitate your learning", however, indicated polarising views on the non-traditional tutorial. Some students responded that they really liked the format and the emphasis on understanding the concepts, while others were very vocal that they 'hated' the style and just wanted the solutions to be presented by the tutor. RECOMMENDATIONS/IMPLICATIONS/CONCLUSION Preliminary results indicated a mixed, but overall positive, response by students to the more collaborative tutorials employing tasks promoting inquiry-based, peer-to-peer, active, and ownership-of-learning activities. Preliminary results from student feedback support evidence that students learn differently, and that running tutorials focusing on only one pedagogical approach (typically didactic) may not be beneficial to all students. Further, preliminary data suggest that the learning/teaching styles of both students and tutor are important to promoting deep learning in students. Data collection is still ongoing and scheduled for completion at the end of First Semester (Australian academic calendar). The final paper will examine in more detail the results and analysis of this project.
Abstract:
This research project investigated a bioreactor system capable of high-density cell growth intended for use in regenerative medicine and protein production. The bioreactor was based on a drip-perfusion concept and constructed with minimal costs, readily available components, and straightforward processes for usage. This study involved the design, construction, and testing of the bioreactor; the results showed promising three-dimensional cell growth within a polymer structure. The accessibility of this equipment and the capability of high-density, three-dimensional cell growth would make it suitable for future research in pharmaceutical drug manufacturing, and human organ and tissue regeneration.
Abstract:
In this thesis we study a series of multi-user resource-sharing problems for the Internet, which involve distribution of a common resource among the participants of multi-user systems (servers or networks). We study concurrently accessible resources, which may be either exclusively or non-exclusively accessible to end-users. For each kind we suggest a separate algorithm or a modification of a common reputation scheme. Every algorithm or method is studied from different perspectives: optimality of the protocol, selfishness of end-users, and fairness of the protocol for end-users. On the one hand, this multifaceted analysis allows us to select the best-suited protocols among a set of various available ones, based on trade-offs among optimality criteria. On the other hand, predictions about the future Internet dictate new optimality rules that we should take into account, and new properties of the networks that cannot be neglected any more. In this thesis we have studied new protocols for such resource-sharing problems as the backoff protocol, defense mechanisms against denial-of-service attacks, and fairness and confidentiality for users in overlay networks. For the backoff protocol, we present an analysis of a general backoff scheme, in which an optimization is applied to a general-form backoff function. It leads to an optimality condition for backoff protocols in both slotted-time and continuous-time models. Additionally, we present an extension of the backoff scheme in order to achieve fairness for the participants in an unfair environment, such as one with unequal wireless signal strengths. Finally, for the backoff algorithm we suggest a reputation scheme that deals with misbehaving nodes. For the next problem, denial-of-service attacks, we suggest two schemes that deal with malicious behavior under two conditions: forged identities and unspoofed identities.
For the first condition we suggest a novel most-knocked-first-served algorithm, while for the latter we apply a reputation mechanism in order to restrict resource access for misbehaving nodes. Finally, we study the reputation scheme for overlays and peer-to-peer networks, where the resource is not held at a common station but is spread across the network. The theoretical analysis suggests which behavior will be selected by an end station under such a reputation mechanism.
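As a concrete baseline for the general backoff schemes the thesis analyses, a standard binary exponential backoff with a simple contention round can be sketched as follows. This is an illustration of the textbook baseline only, under our own assumptions (function names are ours), not the thesis's optimised general-form backoff function:

```python
import random

def binary_exponential_backoff(attempt, max_exponent=10):
    """Standard binary exponential backoff: after the k-th consecutive
    collision, a node waits a uniformly random number of slots drawn
    from [0, 2^k - 1], with the exponent capped at max_exponent."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)

def contend(attempts):
    """One contention round: every node (attempts[i] = node i's current
    collision count) draws a backoff; the round is won only if exactly
    one node picked the smallest waiting time, otherwise it is a collision."""
    draws = [binary_exponential_backoff(a) for a in attempts]
    smallest = min(draws)
    winners = [i for i, d in enumerate(draws) if d == smallest]
    return winners[0] if len(winners) == 1 else None  # None = collision
```

A general-view backoff scheme in the thesis's sense would replace the `2 ** k` growth rule with an arbitrary backoff function and then optimise that function, which this sketch does not attempt.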
Abstract:
Place identification is the methodology of automatically detecting spatial regions or places that are meaningful to a user by analysing her location traces. Following this approach, several algorithms have been proposed in the literature. Most of the algorithms perform well on a particular data set with a suitable choice of parameter values. However, tuneable parameters make it difficult for an algorithm to generalise to data sets collected from different geographical locations, different periods of time, or containing different activities. This thesis compares the generalisation performance of our proposed DPCluster algorithm along with six state-of-the-art place identification algorithms on twelve location data sets collected using the Global Positioning System (GPS). Spatial and temporal variations present in the data help us to identify strengths and weaknesses of the place identification algorithms under study. We begin by discussing the notion of a place and its importance in location-aware computing. Next, we discuss the different phases of the place identification process found in the literature, followed by a thorough description of the seven algorithms. After that, we define evaluation metrics, compare the generalisation performance of the individual place identification algorithms, and report the results. The results indicate that the DPCluster algorithm outperforms all other algorithms in terms of generalisation performance.
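To make the task concrete, a minimal time-based stay extraction — one common early phase of place identification from GPS traces — can be sketched as below. This is a generic illustration with parameters of our own choosing (a 100 m radius and 10-minute dwell threshold), not the DPCluster algorithm or any of the six compared algorithms:

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between (lat, lon) points in degrees."""
    R = 6371000.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def extract_stays(trace, radius_m=100.0, min_duration_s=600.0):
    """Time-based stay extraction: a run of consecutive fixes that all lie
    within radius_m of the run's first fix, and that spans at least
    min_duration_s, becomes one candidate place (its centroid).

    trace: time-ordered list of (timestamp_s, lat, lon) tuples."""
    stays, i, n = [], 0, len(trace)
    while i < n:
        j = i + 1
        while j < n and haversine_m(trace[i][1:], trace[j][1:]) <= radius_m:
            j += 1
        if trace[j - 1][0] - trace[i][0] >= min_duration_s:
            lats = [p[1] for p in trace[i:j]]
            lons = [p[2] for p in trace[i:j]]
            stays.append((sum(lats) / len(lats), sum(lons) / len(lons)))
            i = j
        else:
            i += 1
    return stays
```

The radius and dwell-time thresholds are exactly the kind of tuneable parameters that, per the abstract, hinder generalisation across data sets.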
Abstract:
E-government provides a platform for governments to implement web-enabled services that facilitate communication between citizens and the government. However, a technology-driven design approach and limited understanding of citizens' requirements have led to a number of critical usability problems on government websites. Hitherto, there has been no systematic attempt to analyse the way in which the theory of User-Centred Design (UCD) can contribute to addressing the usability issues of government websites. This research seeks to fill this gap by synthesising perspectives drawn from the study of User-Centred Design and examining them against empirical data derived from a case study of the Scottish Executive website. The research employs a qualitative approach in the collection and analysis of data. The triangulated analysis of the findings reveals that e-government web designers take a commercial development approach and focus only on technical implementations, which leads to websites that do not meet citizens' expectations. The research identifies that e-government practitioners can overcome web usability issues by transferring the theory of UCD to practice.
Abstract:
E-government provides a platform for governments to implement web-enabled services that facilitate communication between citizens and the government. However, a technology-driven design approach and limited understanding of citizens' requirements have led to a number of critical usability problems on government websites. Hitherto, there has been no systematic attempt to analyse the way in which the theory of User-Centred Design (UCD) can contribute to addressing the usability issues of government websites. This research seeks to fill this gap by synthesising perspectives drawn from the study of UCD and examining them against empirical data derived from a case study of the Scottish Executive (SE) website. The research employs a qualitative approach in the collection and analysis of data. The triangulated analysis of the findings reveals that e-government web designers take a commercial development approach and focus only on technical implementations, which leads to websites that do not meet citizens' expectations. The research identifies that e-government practitioners can overcome web usability issues by transferring the theory of UCD to practice. © Copyright 2010 Inderscience Enterprises Ltd.
Abstract:
Business process models have become an effective way of examining business practices to identify areas for improvement. While common information-gathering approaches are generally efficacious, they can be quite time-consuming and risk introducing inaccuracies when information is forgotten or incorrectly interpreted by analysts. In this study, the potential of a role-playing approach to process elicitation and specification has been examined. This method allows stakeholders to enter a virtual world and role-play actions similarly to how they would in reality. As actions are completed, a model is automatically developed, removing the need for stakeholders to learn and understand a modelling grammar. An empirical investigation comparing both the modelling outputs and participant behaviour of this virtual world role-play elicitor with those of an S-BPM process modelling tool found that, while the modelling approaches of the two groups varied greatly, the virtual world elicitor may not only improve both the number of individual process task steps remembered and the correctness of task ordering, but also reduce the time required for stakeholders to model a process view.
Abstract:
In wireless ad hoc networks, nodes communicate with far-off destinations using intermediate nodes as relays. Since wireless nodes are energy-constrained, it may not be in the best interest of a node to always accept relay requests. On the other hand, if all nodes decide not to expend energy in relaying, then network throughput will drop dramatically. Both these extreme scenarios (complete cooperation and complete noncooperation) are inimical to the interests of a user. In this paper, we address the issue of user cooperation in ad hoc networks. We assume that nodes are rational, i.e., their actions are strictly determined by self-interest, and that each node is associated with a minimum lifetime constraint. Given these lifetime constraints and the assumption of rational behavior, we are able to determine the optimal share of service that each node should receive. We define this to be the rational Pareto optimal operating point. We then propose a distributed and scalable acceptance algorithm called Generous TIT-FOR-TAT (GTFT). The acceptance algorithm is used by the nodes to decide whether to accept or reject a relay request. We show that GTFT results in a Nash equilibrium and prove that the system converges to the rational Pareto optimal operating point.
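The core generous tit-for-tat idea — accept a relay request unless the service you have provided already exceeds the service you have received by more than a small allowance — can be sketched in a few lines. This is a minimal illustration of the idea under our own simplifications (the acceptance condition and parameter names are ours; the paper's GTFT additionally accounts for per-node lifetime constraints):

```python
def gtft_accept(my_relays_done, peer_relays_done, generosity=5):
    """Generous tit-for-tat style acceptance rule: accept as long as the
    relaying we have done for the peer does not exceed what the peer has
    done for us by more than the generosity allowance."""
    return my_relays_done - peer_relays_done <= generosity

def simulate(rounds=100, generosity=5):
    """Two nodes alternately request relays from each other, each applying
    the rule above. With generosity > 0, mutual cooperation is sustained
    and service stays balanced; with generosity < 0, cooperation never starts."""
    done = [0, 0]  # relays each node has performed for the other
    accepted = 0
    for r in range(rounds):
        requester, relay = r % 2, 1 - r % 2
        if gtft_accept(done[relay], done[requester], generosity):
            done[relay] += 1
            accepted += 1
    return accepted, done
```

The generosity term is what prevents a deadlock of mutual refusal: a strictly tit-for-tat node (zero allowance and perfect bookkeeping) could otherwise lock into noncooperation after a single miscount.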