Abstract:
Immersive environments are part of a recent media innovation that allows users to become so involved within a computer-based simulated environment that they feel part of that virtual world (Grigorovici, 2003). A specific example is Second Life, an internet-based, three-dimensional immersive virtual world in which users create an online representation of themselves (an avatar) to play games and interact socially with thousands of people simultaneously. This study focuses on Second Life as an example of an immersive environment, as it is the largest adult freeform virtual world, home to 12 million avatars (Iowa State University, 2008). Second Life already hosts more than 100 real-life brands from a range of industries, including automotive, professional services, consumer goods and travel (KZero, 2007; New Business Horizons, 2009). Compared to traditional advertising media, this interactive medium can immerse users in the environment. As a result of this interactivity, users can become more involved with a virtual environment, resulting in prolonged usage over weeks, months and even years. It can also facilitate presence. Despite these developments, little is known about the effectiveness of marketing messages in a virtual world context. Marketers are incorporating products into Second Life using a strategy of online product placement. This study, therefore, explores the perceived effectiveness of online product placement in Second Life in terms of effects on product/brand recall, purchase intentions and trial. This research examines the association between individuals' involvement with Second Life and online product placement effectiveness. In addition, it investigates the association between immersion and product placement involvement.
It also examines the impact of product placement involvement on online product placement effectiveness and the role of presence in affecting this relationship. An exploratory study was conducted using semi-structured in-depth interviews held face-to-face, by email and in-world. The sample comprised 24 active Second Life users. Results indicate that product placement effectiveness is not directly associated with Second Life involvement; rather, effectiveness is impacted through the effect of Second Life involvement on product placement involvement. A positive relationship was found between individuals' product placement involvement and online product placement effectiveness. Findings also indicate that online product placement effectiveness is not directly associated with immersion. Rather, it appears that effectiveness is impacted through the effect of immersion on product placement involvement. Moreover, higher levels of presence appear to have a positive impact on the relationship between product placement involvement and product placement effectiveness. Finally, a model was developed from this qualitative study for future testing. In terms of theoretical contributions, this study provides a new model for testing the effectiveness of product placement within immersive environments. From a methodological perspective, in-world interviews were undertaken as a new research method. In terms of a practical contribution, the findings identify useful information for marketers and advertising agencies that aim to promote their products in immersive virtual environments like Second Life.
Abstract:
Unified enterprise application security is a new emerging approach for providing protection against application-level attacks. The conventional approach of embedding security into each critical application leads to scattered security mechanisms that are not only difficult to manage but also create security loopholes. According to the CSI/FBI computer crime survey report, almost 80% of security breaches come from authorized users. In this paper, we have worked on the concept of a unified security model, which manages all security aspects from a single security window. The basic idea is to keep business functionality separate from the security components of the application. Our main focus was on designing a framework for a unified layer that supports a single point of policy control, a centralized logging mechanism, and granular, context-aware access control, and that is independent of any underlying authentication technology and authorization policy.
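The separation described above can be sketched in a few lines. The following is a minimal illustration, not the paper's framework: all class, policy and field names are our own hypothetical choices. It shows a unified layer that owns policy control, centralized logging and context-aware access decisions, independent of how the caller was authenticated:

```python
# Hypothetical sketch of a unified security layer: applications delegate
# authorisation to one policy engine instead of embedding their own checks.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Request:
    user: str
    action: str
    resource: str
    context: dict  # e.g. {"time": "business_hours", "location": "office"}

class UnifiedSecurityLayer:
    """Single point of policy control with centralized logging.

    Policies are context-aware predicates; the layer is independent of
    how the user was authenticated upstream.
    """
    def __init__(self):
        self.policies: list[Callable[[Request], bool]] = []
        self.audit_log: list[str] = []  # centralized log of every decision

    def add_policy(self, policy: Callable[[Request], bool]) -> None:
        self.policies.append(policy)

    def authorise(self, req: Request) -> bool:
        allowed = all(p(req) for p in self.policies)
        self.audit_log.append(
            f"{req.user} {req.action} {req.resource} -> "
            f"{'ALLOW' if allowed else 'DENY'}"
        )
        return allowed

# Example context-aware policy: writes are allowed only during business hours.
layer = UnifiedSecurityLayer()
layer.add_policy(lambda r: r.action != "write"
                 or r.context.get("time") == "business_hours")

ok = layer.authorise(Request("alice", "write", "ledger",
                             {"time": "business_hours"}))
denied = layer.authorise(Request("bob", "write", "ledger",
                                 {"time": "after_hours"}))
```

Business applications would call `authorise` rather than embedding their own checks, so policies and audit records live in one place.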
Abstract:
This literature review examines the relationship between traffic lane widths and the safety of road users. It focuses on the impacts of lane widths on motor vehicle behaviour and cyclists' safety. The review commenced with a search of available databases. Peer-reviewed articles and road authority reports were reviewed, as well as current engineering guidelines. Research shows that traffic lane width influences drivers' perceived difficulty of the task, risk perception and possibly speed choices. Total roadway width, and the presence of on-road cycling facilities, influence cyclists' positioning on the road. Lateral displacement between bicycles and vehicles is smallest when a marked bicycle facility is present. Reduced motor vehicle speeds can significantly improve the safety of vulnerable road users, particularly pedestrians and cyclists. It has been shown that reducing lane widths on urban roads, through various mechanisms, could result in a safer environment for all road users.
Abstract:
Distributed Denial of Service (DDoS) attacks have become one of the biggest threats to resources on the Internet. The purpose of these attacks is to prevent servers from providing services to legitimate users. These attacks are also used to consume network bandwidth. Current intrusion detection systems can only detect attacks; they cannot prevent them or track the location of intruders. Some schemes prevent attacks by simply discarding attack packets, which saves the victim from the attack, but network bandwidth is still wasted. In our opinion, DDoS requires a distributed solution to avoid this waste of resources. This paper presents a system that helps not only in detecting such attacks but also in tracing and blocking (to save bandwidth as well) the multiple intruders using intelligent software agents. The system gives a dynamic response and can be integrated with existing network defence systems without disturbing the existing Internet model. We have implemented an agent-based network monitoring system in this regard.
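As a rough sketch of the detect-and-block step such an agent might perform (an illustration under our own assumptions, not the system described above), consider a per-source rate threshold:

```python
# Illustrative monitoring agent: flags and blocks sources that exceed a
# request-rate threshold, the basic detection step an agent-based DDoS
# defence might perform before tracing the source further upstream.
from collections import Counter

class MonitoringAgent:
    def __init__(self, threshold: int):
        self.threshold = threshold     # max requests per window per source
        self.counts = Counter()
        self.blocked: set[str] = set()

    def observe(self, src_ip: str) -> bool:
        """Record one request; return False if the source is (now) blocked."""
        if src_ip in self.blocked:
            return False  # dropped before the victim: saves bandwidth too
        self.counts[src_ip] += 1
        if self.counts[src_ip] > self.threshold:
            self.blocked.add(src_ip)   # candidate for trace-back/blocking
            return False
        return True

agent = MonitoringAgent(threshold=3)
for _ in range(5):
    agent.observe("10.0.0.99")      # flood source exceeds the threshold
agent.observe("192.168.1.5")        # legitimate user stays under it
```

A distributed deployment would place such agents near the traffic sources, so attack packets are dropped before they consume the victim's bandwidth.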
Abstract:
While the information systems (IS) function has gained widespread attention for over two decades, there is little consensus among IS researchers and practitioners on how best to evaluate the IS function's support performance. This paper reports on preliminary findings of a larger research effort that proceeds from a central interest in the importance of evaluating the IS function's support in organisations. This study is the first to attempt to re-conceptualise the IS function's support as a multi-dimensional formative construct. We argue that a holistic measure for evaluating the IS function's support should consist of dimensions that together assess the variety of the support functions and the quality of the support services provided to end-users. Thus, the proposed model consists of two halves, "Variety" and "Quality", within which reside seven dimensions. The Variety half includes five dimensions: Training; Documentation; Data-related Support; Software-related Support; and Hardware-related Support. The Quality half includes two dimensions: IS Support Staff and Support Services Performance. The proposed model is derived using a directed content analysis of 83 studies from top IS outlets, employing the characteristics of analytic theory and consistent with formative construct development procedures.
Abstract:
Since the formal recognition of practice-led research in the 1990s, many higher research degree candidates in art, design and media have submitted creative works along with an accompanying written document or ‘exegesis’ for examination. Various models for the exegesis have been proposed in university guidelines and academic texts during the past decade, and students and supervisors have experimented with its contents and structure. With a substantial number of exegeses submitted and archived, it has now become possible to move beyond proposition to empirical analysis. In this article we present the findings of a content analysis of a large, local sample of submitted exegeses. We identify the emergence of a persistent pattern in the types of content included as well as overall structure. Besides an introduction and conclusion, this pattern includes three main parts, which can be summarized as situating concepts (conceptual definitions and theories); precedents of practice (traditions and exemplars in the field); and researcher’s creative practice (the creative process, the artifacts produced and their value as research). We argue that this model combines earlier approaches to the exegesis, which oscillated between academic objectivity, by providing a contextual framework for the practice, and personal reflexivity, by providing commentary on the creative practice. But this model is more than simply a hybrid: it provides a dual orientation, which allows the researcher to both situate their creative practice within a trajectory of research and do justice to its personally invested poetics. By performing the important function of connecting the practice and creative work to a wider emergent field, the model helps to support claims for a research contribution to the field. We call it a connective model of exegesis.
Abstract:
Interactive documents for use with the World Wide Web have been developed for viewing multi-dimensional radiographic and visual images of human anatomy, derived from the Visible Human Project. Emphasis has been placed on user-controlled features and selections. The purpose was to develop an interface, independent of the host operating system and browser software, that would allow viewing of information by multiple users. The interfaces were implemented using HyperText Markup Language (HTML) forms, the C programming language and the Perl scripting language. Images were pre-processed using ANALYZE and stored on a Web server in CompuServe GIF format. Viewing options were included in the document design, such as interactive thresholding and two-dimensional slice direction. The interface is an example of what may be achieved using the World Wide Web. Key applications envisaged for such software include education, research, the accessing of information through internal databases, and the simultaneous sharing of images by remote computers by health personnel for diagnostic purposes.
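The interactive thresholding mentioned above reduces, at its core, to masking pixels against a user-submitted cut-off. A small sketch of that core operation (the function name and the tiny grey-level "slice" are illustrative, not from the original Perl/C implementation):

```python
# Interactive thresholding, minimally: the server receives a threshold
# value from an HTML form and returns a binary mask of the image slice.

def threshold_image(pixels, threshold):
    """Return a binary mask: 1 where the pixel meets the threshold."""
    return [[1 if p >= threshold else 0 for p in row] for row in pixels]

# Grey-level values as a stored slice might hold them server-side.
slice_pixels = [
    [ 12,  40, 200, 180],
    [  8, 220, 240,  30],
    [ 90, 130,  60,  10],
]

# A user-chosen threshold of 100, as submitted from the form.
mask = threshold_image(slice_pixels, threshold=100)
```

In the system described, the masked result would then be rendered back to the browser as a GIF.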
Abstract:
Our objective was to determine the factors that lead users to continue working with process modeling grammars after their initial adoption. We examined the explanatory power of three theoretical models of IT usage by applying them to two popular process modeling grammars. We found that a hybrid model of technology acceptance and expectation-confirmation best explained user intentions to continue using the grammars. We examined differences in the model results and used them to provide three contributions. First, the study confirmed the applicability of IT usage models to the domain of process modeling. Second, we discovered that differences in continued usage intentions depended on the grammar type rather than on user characteristics. Third, we suggest implications for research and practice.
Abstract:
A significant amount (ca. 15-25 GL/a) of PRW (Purified Recycled Water) from urban areas is foreseen as augmentation of the depleted groundwater resources of the Lockyer Valley (approx. 80 km west of Brisbane). The research project uses field investigations, lab trials and modelling techniques to address the key challenges: (i) how to determine the benefits to individual users from the augmentation of a natural common pool resource; (ii) how to minimise the impacts of applying different quality water on the Lockyer soils, creeks and aquifer materials; (iii) how to minimise the mobilisation of salts in the unsaturated and saturated zones as a result of increased deep drainage; and (iv) how to determine the potential for direct aquifer recharge using injection wells.
Abstract:
Computer profiling is the automated forensic examination of a computer system in order to provide a human investigator with a characterisation of the activities that have taken place on that system. As part of this process, the logical components of the computer system – components such as users, files and applications - are enumerated and the relationships between them discovered and reported. This information is enriched with traces of historical activity drawn from system logs and from evidence of events found in the computer file system. A potential problem with the use of such information is that some of it may be inconsistent and contradictory thus compromising its value. This work examines the impact of temporal inconsistency in such information and discusses two types of temporal inconsistency that may arise – inconsistency arising out of the normal errant behaviour of a computer system, and inconsistency arising out of deliberate tampering by a suspect – and techniques for dealing with inconsistencies of the latter kind. We examine the impact of deliberate tampering through experiments conducted with prototype computer profiling software. Based on the results of these experiments, we discuss techniques which can be employed in computer profiling to deal with such temporal inconsistencies.
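One simple form such a temporal-inconsistency check could take (a hedged sketch under our own assumptions, not the prototype's algorithm): flag any event stamped earlier than an event it causally depends on.

```python
# Toy temporal-inconsistency detector: an event's timestamp must not
# precede the timestamp of an event it causally depends on. Violations
# may reflect normal clock errancy or deliberate tampering.
from datetime import datetime

def find_inconsistencies(events):
    """events: list of (name, timestamp, depends_on_name_or_None).
    Returns (event, prerequisite) pairs whose ordering is contradictory."""
    by_name = {name: ts for name, ts, _ in events}
    bad = []
    for name, ts, dep in events:
        if dep is not None and dep in by_name and ts < by_name[dep]:
            bad.append((name, dep))
    return bad

events = [
    ("file_created",  datetime(2009, 3, 1, 10, 0), None),
    # modification stamped *before* creation: a contradictory trace
    ("file_modified", datetime(2009, 3, 1, 9, 30), "file_created"),
    ("file_read",     datetime(2009, 3, 1, 11, 0), "file_created"),
]

conflicts = find_inconsistencies(events)
```

A profiling tool would report such conflicts to the investigator rather than silently trusting either timestamp.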
Abstract:
Using information and communication technology devices in public urban places can help to create a personalised space. Looking at a mobile phone screen or listening to music on an MP3 player is a common practice for avoiding direct contact with others, e.g. whilst using public transport. However, such devices can also be utilised to explore how to build new meaningful connections with the urban space and the collocated people within it. We present findings of work-in-progress on Capital Music, a mobile application enabling urban dwellers to listen to songs as usual, but also allowing them to announce song titles and discover songs currently being listened to by other people in the vicinity. We study the ways this tool can change or even enhance people's experience of public urban spaces. Our first user study also found changes in how participants chose songs. The first iteration of the prototype we studied implements anonymous social interactions based on users' music selections.
Abstract:
While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, speech signals enhanced by a microphone array have been shown to be an effective alternative in noisy environments. Using microphone arrays rather than close-talking microphones alleviates the user's feelings of discomfort and distraction. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users' environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms, which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings. The NIST evaluation data comes from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays.
A particular focus is on devising methods that are robust to unknown microphone placements in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known, and beamforming must be achieved blindly. There are two general approaches to blindly estimating the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. The alternative is to first determine the unknown microphone positions through array calibration methods and then use the traditional geometrical formulation for the steering vector. Following these two major approaches investigated in this thesis, a novel clustered approach is proposed, which involves clustering the microphones and selecting clusters based on their proximity to the speaker. Novel experiments demonstrate that the proposed method of automatically selecting a cluster of microphones (i.e., a subarray), located close both to each other and to the desired speech source, may in fact provide more robust speech enhancement and recognition than the full array.
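The clustered subarray selection can be illustrated with a toy sketch. The greedy distance-based grouping below is our own assumption, standing in for whatever calibration-based clustering the thesis actually employs; positions and the radius are illustrative.

```python
# Toy clustered-subarray selection: group microphones that lie close
# together, then pick the cluster nearest the estimated speaker position.
import math

def cluster_mics(positions, radius):
    """Greedy grouping: a mic joins the first cluster whose centroid is
    within `radius`; otherwise it starts a new cluster."""
    clusters = []
    for p in positions:
        for c in clusters:
            cx = sum(q[0] for q in c) / len(c)
            cy = sum(q[1] for q in c) / len(c)
            if math.dist(p, (cx, cy)) <= radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def nearest_cluster(clusters, source):
    """Pick the subarray whose centroid is closest to the speaker."""
    def centroid(c):
        return (sum(q[0] for q in c) / len(c), sum(q[1] for q in c) / len(c))
    return min(clusters, key=lambda c: math.dist(centroid(c), source))

mics = [(0, 0), (0.3, 0.1), (5, 5), (5.2, 4.9)]   # two natural groups
clusters = cluster_mics(mics, radius=1.0)
subarray = nearest_cluster(clusters, source=(4.5, 4.5))
```

Beamforming would then run only over the selected subarray, whose members are mutually close and near the speaker.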
Abstract:
One of the most celebrated qualities of the Internet is its enabling of simultaneity and multiplicity. By allowing users to open as many windows into the world as they (and their computers) can withstand, the Internet is understood to have brought places and cultures together on a scale and in a manner unprecedented. Yet, while the Internet has enabled many to reconnect with cultures and places long distanced and/or lost, it has also led to the belief that these reconnections are established with little correspondent cost to existent ties of belonging. In this paper, I focus on the dilemma multiple belongings engender for the ties of national belonging and question the sanguinity of multiple belongings as practised online. In particular, I use Lefebvre's notion of lived space to unpack the problems and contradictions of what has been called 'Greater China' for the ethnic Chinese minority in nations like Malaysia, Singapore and Australia.
Abstract:
Authorised users (insiders) are behind the majority of security incidents with high financial impacts. Because authorisation is the process of controlling users' access to resources, improving authorisation techniques may mitigate the insider threat. Current approaches to authorisation suffer from the assumption that users will not (or cannot) depart from the expected behaviour implicit in the authorisation policy. In reality, however, users can and do depart from this canonical behaviour. This paper argues that the conflict of interest between insiders and authorisation mechanisms is analogous to the subset of problems formally studied in the field of game theory. It proposes a game-theoretic authorisation model that ensures users' potential misuse of a resource is explicitly considered when making an authorisation decision. The resulting authorisation model is dynamic in the sense that its access decisions vary according to changes in the explicit factors that influence the cost of misuse for both the authorisation mechanism and the insider.
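The kind of cost-aware, dynamic decision described above can be caricatured as an expected-utility comparison. This is a toy sketch under our own assumptions; the parameters and their values are illustrative, not the paper's model.

```python
# Toy expected-utility access decision: grant only when the expected
# value of granting (legitimate benefit minus probability-weighted
# misuse cost) beats the value of denying.

def authorise(benefit, misuse_cost, p_misuse, deny_cost=0.0):
    """Access decision from the mechanism's viewpoint.

    benefit:     value gained when the insider uses the resource properly
    misuse_cost: loss to the organisation if the insider misuses it
    p_misuse:    current estimate of this insider's misuse probability
    deny_cost:   productivity lost by refusing access
    """
    expected_grant = (1 - p_misuse) * benefit - p_misuse * misuse_cost
    expected_deny = -deny_cost
    return expected_grant > expected_deny

# The decision is dynamic: the same request flips as the misuse
# estimate changes over time.
low_risk = authorise(benefit=10, misuse_cost=100, p_misuse=0.01)
high_risk = authorise(benefit=10, misuse_cost=100, p_misuse=0.5)
```

A game-theoretic model would additionally account for the insider's own payoffs and strategic responses, which this single-sided sketch omits.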
Abstract:
Annual reports are an important component of New Zealand schools' public accountability. Through the annual report, the governance body informs stakeholders about school aims, objectives, achievements, use of resources, and financial performance. This paper identifies the perceived usefulness of the school annual report to recipients and the extent to which it serves as an instrument of accountability and/or decision-usefulness. The study finds that the annual report is used for a variety of purposes, including: to determine whether the school has conducted its activities effectively and achieved stated objectives and goals; to examine student achievements; to assess financial accountability and performance; and to make decisions about the school as a suitable environment for their child or children. Nevertheless, the study also finds that other forms of communication are more important sources of information about the school than the annual report, which is seen to fall short of users' required qualities of understandability, reliability and readability. It would appear imperative that policy makers review the functional role of the school annual report, which is a costly document to prepare. Further, school managers need to adopt alternative means of communicating sufficient and meaningful information in the discharge of public accountability.