Abstract:
Unified communications as a service (UCaaS) is a cost-effective model for on-demand delivery of unified communications services in the cloud. However, addressing security concerns is widely seen as the biggest challenge to the adoption of IT services in the cloud. This study set up a cloud system via the VMware suite to emulate hosting unified communications (UC) services, that is, the integration of two or more real-time communication systems, in the cloud in a laboratory environment. An Internet Protocol Security (IPSec) gateway was also set up to provide network-level security for UCaaS against possible security exposures. The study aimed to analyse an implementation of UCaaS over IPSec and to evaluate the latency of UC traffic while that traffic is encrypted. Our test results show no added latency when IPSec is implemented with the G.711 audio codec. However, the G.722 audio codec under IPSec degrades the overall performance of the UC server. These results offer technical advice and guidance to those responsible for security controls in UC deployments, on premises as well as in the cloud.
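As a back-of-the-envelope illustration of the bandwidth cost of tunnelling voice traffic in IPSec, the sketch below estimates the on-wire size and bandwidth of a G.711 stream in ESP tunnel mode. The header and overhead figures (RTP/UDP/IP headers, AES-CBC IV, HMAC-SHA1-96 ICV) are textbook values assumed for illustration; they are not taken from the study's test bed.

```python
# Approximate per-packet size and bandwidth of VoIP audio tunnelled in
# IPsec ESP (tunnel mode, AES-CBC + HMAC-SHA1-96). Overhead figures are
# illustrative assumptions, not measurements.

def esp_tunnel_size(payload: int) -> int:
    """Return the on-wire IP packet size after ESP tunnel encapsulation."""
    inner = 20 + 8 + 12 + payload          # inner IP + UDP + RTP headers
    block = 16                             # AES-CBC block size
    # ESP trailer: pad ciphertext to the cipher block size, including
    # the 2 trailing bytes (pad length, next header)
    padded = -(-(inner + 2) // block) * block
    return 20 + 8 + 16 + padded + 12       # outer IP + ESP hdr + IV + ICV

def bandwidth_kbps(payload: int, pps: int) -> float:
    """Total one-way bandwidth in kbit/s at the given packet rate."""
    return esp_tunnel_size(payload) * 8 * pps / 1000

# G.711: 64 kbit/s codec, 20 ms packetisation -> 160-byte payload, 50 pps
print(esp_tunnel_size(160), bandwidth_kbps(160, 50))
```

Note that G.722 also typically produces a 64 kbit/s stream, so its packet overhead under IPSec is essentially the same; this is consistent with the study's observed G.722 impact being a matter of server processing cost rather than bandwidth.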
Abstract:
The Australian Naturalistic Driving Study (ANDS), a ground-breaking study of Australian driver behaviour and performance, was officially launched on 21 April 2015 at UNSW. The ANDS project will provide a realistic perspective on the causes of vehicle crashes and near-miss crash events, along with the roles that speeding, distraction and other factors play in such events. A total of 360 volunteer drivers across NSW and Victoria (180 in NSW and 180 in Victoria) will be monitored by a Data Acquisition System (DAS) continuously recording their driving behaviour for four months using a suite of cameras and sensors. Participants' driving behaviour (e.g. gaze), the behaviour of their vehicle (e.g. speed, lane position) and the behaviour of other road users with whom they interact in normal and safety-critical situations will be recorded. Planning of the ANDS commenced over two years ago, in June 2013, when the Multi-Institutional Agreement for a grant supporting the equipment purchase and assembly phase was signed by the parties involved in this large-scale $4 million study (5 university accident research centres, 3 government regulators, 2 third-party insurers and 2 industry partners). The program's second development phase commenced a year later, in June 2014, after a second grant was awarded. This paper presents an insider's view of that two-year process leading up to the launch, and outlines issues that arose in the set-up phase of the study and how they were addressed. This information will be useful to other organisations considering setting up an NDS.
Abstract:
Background: To reduce nursing shortages, accelerated nursing programs are available for domestic and international students. However, the withdrawal and failure rates of these programs may differ from those of traditional programs. The main aim of our study was to improve the retention and experience of accelerated nursing students. Methods: The academic background, age, withdrawal rates and failure rates of the accelerated and traditional students were determined. Data from 2009 and 2010 were collected prior to intervention. In an attempt to reduce the withdrawal of accelerated students, we set up an intervention, which was available to all students. The assessment of the intervention was a pre-post-test design with non-equivalent groups (the traditional and the accelerated students). The elements of the intervention were a) a formative website activity on basic concepts in anatomy, physiology and pharmacology, b) a workshop addressing study skills and online resources, and c) resource lectures in anatomy/physiology and microbiology. The formative website and the workshop were evaluated using questionnaires. Results: The accelerated nursing students were five years older than the traditional students (p < 0.0001). The withdrawal rates from a pharmacology course were higher for accelerated nursing students than for traditional students who had undertaken first-year courses in anatomy and physiology (p = 0.04 in 2010). The withdrawing students were predominantly domestic students with non-university qualifications or equivalent experience. The failure rates were also higher for this group compared to the traditional students (p = 0.05 in 2009 and 0.03 in 2010). In contrast, the withdrawal rates for the international and domestic graduate accelerated students were very low. After the intervention, the withdrawal and failure rates in pharmacology for domestic accelerated students with non-university qualifications were not significantly different from those of traditional students. Conclusions: The accelerated international and domestic graduate nursing students have low withdrawal rates and high success rates in a pharmacology course. However, domestic students with non-university qualifications have higher withdrawal and failure rates than other nursing students and may be underprepared for university study of pharmacology in nursing programs. The introduction of the intervention was associated with reduced withdrawal and failure rates for these students in the pharmacology course.
Abstract:
Many existing companies have set up corporate websites in response to competitive pressures and/or the perceived advantages of having a presence in marketspace. However, the effect of this form of communication and way of doing business on the corporate brand has yet to be examined in detail. In this article we argue that the translation of corporate brand values from marketplace to marketspace is often problematic, leading to inconsistencies in the way the brand values are interpreted. Some of the issues discussed are: 1) the effect of changed organizational boundaries on the corporate brand; 2) the need to examine whether it is strategically feasible to translate the corporate brand values from marketplace to marketspace; 3) the inherent difficulty of communicating the emotional aspects of the corporate brand in marketspace; and 4) the need to manage the online brand for consistency with the offline brand. The conclusion reached is that a necessary part of embracing marketspace as part of a corporate brand strategy is a plan to manage the consistency and continuity of the corporate brand when applied to the Internet. Where this is not achievable, a separate corporate brand or a brand extension is a preferable alternative.
Abstract:
The research reported in this paper documents the use of Web 2.0 applications with six Western Australian schools that are considered regional and/or remote. With a population of two million people within an area of 2,525,500 square kilometres, Western Australia has a number of towns that are classified as regional and remote. Each of the three education systems has set up telecommunications networks to improve learning opportunities for students and administrative services for staff through a virtual private network (VPN) with access from anywhere at any time, and ultimately to reduce the feeling of professional and social dislocation experienced by many teachers and students in isolated communities. By using Web 2.0 applications, including video conferencing, there are enormous opportunities to close the digital divide within the broad directives of the Networking the Nation plan, which aims to connect all Australians regardless of where they are, thereby closing the digital divide between city and regional living. Email and Internet facilities have greatly improved in rural, regional and remote areas, supporting everyday school use of the Internet. This study highlights the possibilities and issues for advanced telecommunications usage of Web 2.0 applications, discussing the research undertaken with these schools. (Contains 1 figure and 3 tables.)
Abstract:
A computational model for isothermal axisymmetric turbulent flow in a quarl burner is set up using the CFD package FLUENT, and numerical solutions obtained from the model are compared with available experimental data. A standard k-ε model and two versions of the RNG k-ε model are used to model the turbulence. One aim of the computational study is to investigate whether the RNG-based k-ε turbulence models are capable of yielding improved flow predictions compared with the standard k-ε turbulence model. A difficulty is that the flow considered here features a confined vortex breakdown, which can be highly sensitive to flow behaviour both upstream and downstream of the breakdown zone. Nevertheless, the relatively simple confining geometry allows us to undertake a systematic study so that both grid-independent and domain-independent results can be reported. The systematic study includes a detailed investigation of the effects of upstream and downstream conditions on the predictions, in addition to grid refinement and other tests to ensure that numerical error is not significant. Another important aim is to determine to what extent the turbulence model predictions can provide new insights into the physics of confined vortex breakdown flows. To this end, the computations are discussed in detail with reference to known vortex breakdown phenomena and existing theories. A major conclusion is that one of the RNG k-ε models investigated here is able to correctly capture the complex forward flow region inside the recirculating breakdown zone. This apparently paradoxical result is in stark contrast to the findings of previous studies, most of which have concluded that either algebraic or differential Reynolds stress modelling is needed to correctly predict the observed flow features. Arguments are given as to why an isotropic eddy-viscosity turbulence model may well be able to capture the complex flow structure within the recirculating zone for this flow setup. With regard to the flow physics, a major finding is that the results obtained here are more consistent with the view that confined vortex breakdown is a type of axisymmetric boundary-layer separation, rather than a manifestation of a subcritical flow state.
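For reference, the textbook form of the standard k-ε model discussed in this abstract solves, alongside the mean-flow equations, two transport equations for the turbulence kinetic energy k and its dissipation rate ε, with the eddy viscosity built from them; the constants quoted are the standard published values, not values taken from the paper:

```latex
\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}, \qquad C_\mu = 0.09

\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon

\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_i)}{\partial x_i}
  = \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
  + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}
```

Here P_k is the production of turbulence kinetic energy, with standard constants σ_k = 1.0, σ_ε = 1.3, C_{1ε} = 1.44 and C_{2ε} = 1.92. The RNG variant derives its constants analytically and adds a strain-dependent term to the ε equation, which is one reason it can behave differently from the standard model in strongly swirling, recirculating flows such as confined vortex breakdown.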
Abstract:
Deep packet inspection is a technology that enables examination of the content of information packets being sent over the Internet. The Internet was originally designed around "end-to-end connectivity", allowing nodes of the network to send packets to all other nodes of the network without requiring intermediate network elements to maintain status information about the transmission. In this way, the Internet was created as a "dumb" network, with "intelligent" devices (such as personal computers) at the end, or "last mile", of the network. The dumb network does not interfere with an application's operation, nor is it sensitive to the needs of an application, and as such it treats all information sent over it as (more or less) equal. Deep packet inspection, however, allows the examination of packets at places on the network which are not endpoints. In practice, this permits entities such as Internet service providers (ISPs) or governments to observe the content of the information being sent, and perhaps even to manipulate it. Indeed, the existence and implementation of deep packet inspection may profoundly challenge the egalitarian and open character of the Internet. This paper will first elaborate on what deep packet inspection is and how it works from a technological perspective, before examining how it is being used in practice by governments and corporations. The use of deep packet inspection has already created legal problems involving fundamental rights (especially of Internet users), such as freedom of expression and privacy, as well as more economic concerns, such as competition and copyright. These issues will be considered, and an assessment will be made of the conformity of the use of deep packet inspection with law. The paper concentrates on the use of deep packet inspection in European and North American jurisdictions, where it has already provoked debate, particularly in the context of discussions on net neutrality. It also incorporates a more fundamental assessment of the values that it is desirable for the Internet to respect and exhibit (such as openness, equality and neutrality), before concluding with the formulation of a legal and regulatory response to the use of this technology, in accordance with these values.
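The "dumb network versus deep inspection" distinction described above can be made concrete with a minimal sketch: a conventional router forwards on the IP header alone, whereas a deep packet inspector parses past the IP and TCP headers to look at the application payload itself. The parsing follows the published IPv4 and TCP header layouts; the sample packet is fabricated for illustration and omits checksums and other fields a real device would handle.

```python
import struct

def inspect(packet: bytes) -> dict:
    """Parse past the IPv4 and TCP headers and classify the payload."""
    ihl = (packet[0] & 0x0F) * 4                    # IPv4 header length, bytes
    proto = packet[9]                               # IP protocol field; 6 = TCP
    dport = struct.unpack_from("!H", packet, ihl + 2)[0]   # TCP dest port
    data_off = (packet[ihl + 12] >> 4) * 4          # TCP header length, bytes
    payload = packet[ihl + data_off:]               # the "deep" part
    return {
        "protocol": proto,
        "dst_port": dport,
        "looks_like_http": payload.startswith(b"GET "),
    }

# Fabricated IPv4 + TCP packet carrying the start of an HTTP request.
ip_hdr = bytes([0x45, 0, 0, 0, 0, 0, 0, 0, 64, 6, 0, 0]) \
    + bytes([10, 0, 0, 1]) + bytes([10, 0, 0, 2])
tcp_hdr = struct.pack("!HH", 12345, 80) + b"\x00" * 8 \
    + bytes([0x50, 0]) + b"\x00" * 6
print(inspect(ip_hdr + tcp_hdr + b"GET / HTTP/1.1\r\n"))
```

The payload check is what distinguishes this from ordinary routing: port 80 alone says nothing about content, but matching `GET ` against the payload is a (crude) content-based judgement of the kind ISPs make when throttling or prioritising traffic.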
Abstract:
The potential to cultivate new relationships with spectators has long been cited as a primary motivator for those using digital technologies to construct networked or telematic performances, or para-performance encounters, in which performers and spectators come together in virtual, or at least virtually augmented, spaces and places. Today, with Web 2.0 technologies such as social media platforms becoming increasingly ubiquitous and increasingly easy to use, more and more theatre makers are developing digitally mediated relationships with spectators: sometimes for the purpose of an aesthetic encounter, sometimes for a critical encounter, sometimes as part of an audience politicisation, development or engagement agenda; sometimes because this is a genuine interest, and sometimes because spectators or funding bodies expect at least some engagement via Facebook, Twitter or Instagram. In this paper, I examine peculiarities and paradoxes emerging in some of these efforts to engage spectators via networked performance or para-performance encounters. I use examples ranging from theatre to performance art to political activism: from 'cyberformances' on Helen Varley Jamieson's Upstage Avatar Performance Platform; to Wafaa Bilal's Domestic Tension installation, in which spectators around the world could use a webcam in a chat room to target him with paintballs while he was in residence for a week in a living room set up in a gallery, as a comment on the use of drone technology in war; to Liz Crow's Bedding Out, in which she invited people to physically and virtually join her in her bedroom to discuss the impact of the anti-disabled austerity politics emerging in her country; to Dislife's use of holograms of disabled people popping up in disabled parking spaces when able-bodied drivers attempted to pull into them; amongst others.
I note the frequency with which these performance practices deploy discourses of democratisation, participation, power and agency to argue that these technologies help position spectators as co-creators actively engaged in the evolution of a performance (and, in politicised pieces that point to racism, sexism or ableism, push spectators to reflect on their own agency in that dramatic or daily-cum-dramatic performance of prejudice). I investigate how a range of issues complicate the discourses of democratic co-creativity associated with networked performance and para-performance activities: the scenographic challenges, already noted by others, of deploying networked technologies for both participant and bystander audiences; the siloisation of aesthetic, critical and audience-activation activities on networked technologies; conventionalised dramaturgies of response, informed by power, politics and impression management, that play out in online as much as offline performances; and the high personal, social and professional stakes involved in participating in a form where spectators' responses are almost always documented, recorded and re-represented to secondary and tertiary sets of spectators via the circulation into new networks that social media platforms so readily facilitate.