Abstract:
We investigate how email users' characteristics influence their response to phishing emails. A user generally goes through three stages of behaviour upon receiving a phishing email: suspicion about the email's legitimacy, confirmation of its legitimacy, and response by either performing the requested action or not. Using a mixed-method approach combining experiments, surveys and semi-structured interviews, we found that a user's behaviour at each stage varies with personal characteristics such as personality traits and the ability to perceive information in an email beyond its content. We found, for example, that users who are submissive, extraverted or open tend to be less suspicious of phishing emails, while users who can identify cues, such as an inconsistent IP address, can avoid falling victim to them. Our findings enable us to draw practical implications for educating users and potentially reducing the incidence of phishing email victimisation.
Abstract:
Engineering design processes are necessary to attain the requisite standards of integrity for high-assurance safety-related systems. Additionally, human factors design initiatives can provide critical insights that parameterise their development. Unfortunately, the popular perception of human factors as a “forced marriage” between engineering and psychology often provokes views in which the ‘human factor’ is perceived as a threat to systems design. Some popular performance-based standards for developing safety-related systems advocate identifying and managing human factors throughout the system lifecycle. However, they tend to fall short in their guidance on the application of human factors methods and tools, let alone how the outputs generated can be integrated into the various stages of the design process. This case study describes a project that converged engineering with human factors to develop a safety argument for new low-cost railway level crossing technology for system-wide implementation in Australia. The paper joins the perspectives of a software engineer and a cognitive psychologist and draws on their involvement in the project over two years of collaborative work to develop a safety argument for low-cost level crossing technology. Safety and reliability requirements were informed by applying human factors analytical tools that supported the evaluation and quantification of human reliability where users interfaced with the technology. The project team was confronted with significant challenges in cross-disciplinary engagement, particularly the complexities of dealing with incongruences in disciplinary language. They were also encouraged to think ‘outside the box’ about how users of a system interpreted system states and behaviour. Importantly, some of these states, while considered safe within the boundary of the constituent systems that implemented safety-related functions, could actually lead users to engage in deviant behaviour. Psychology explained how user compliance could be eroded to levels that effectively undermined the risk reduction afforded by the systems. By linking the engineering and psychology disciplines, overall safety performance was improved through technical requirements and design decisions that minimised the system states and behaviours that led to user deviancy. As a commentary on the utility of transdisciplinary collaboration for technical specification, the processes used to bridge the two disciplines are conceptualised in a graphical model.
Abstract:
There has long been widespread community, political and industry agreement on the important place of Australian content in the media mix, but debate continues over what counts as Australian content in different media, how that content is defined, how much there should be, whether particular genres should be privileged, who should finance production, and how all of these things should be regulated.
Abstract:
To ensure the safe operation of Web-based systems, we propose the SSPA (Server-based SHA-1 Page-digest Algorithm) to verify the integrity of Web content before the server issues an HTTP response to a user request. In addition to standard security measures, our Java implementation of the SSPA, called the Dynamic Security Surveillance Agent (DSSA), provides further security in terms of content integrity for Web-based systems. Its function is to prevent Web content that has been altered through the malicious acts of attackers and intruders from being displayed on client machines. This protects the reputation of organisations from cyber-attacks and ensures the safe operation of Web systems by dynamically monitoring the integrity of a Web site's content on demand. We discuss our findings in terms of the applicability and practicality of the proposed system. We also discuss its time metrics, specifically its computational overhead at the Web server and the overall latency from the clients' point of view under different Internet access methods. The SSPA, our DSSA implementation, some experimental results and related work are all discussed.
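As a rough illustration of the kind of check the SSPA performs, the sketch below hashes a page with SHA-1 and compares it against a stored baseline digest before the page is served. The file name, baseline store and handler logic are hypothetical placeholders; the actual DSSA is a Java agent, and this is only a minimal Python sketch of the digest comparison.

```python
# Minimal sketch of a server-side page-digest check in the spirit of the SSPA.
# The baseline digest store and file name are hypothetical placeholders.
import hashlib

baseline_digests = {
    "index.html": "2aae6c35c94fcfb415dbe95f408b9ce91ee846ed",  # hypothetical known-good digest
}

def page_is_intact(path: str) -> bool:
    """Return True if the page's current SHA-1 digest matches the stored baseline."""
    with open(path, "rb") as f:
        current = hashlib.sha1(f.read()).hexdigest()
    return current == baseline_digests.get(path)

def serve(path: str) -> bytes:
    # Only issue the HTTP response if the content integrity check passes;
    # otherwise withhold the page and flag it for the administrator.
    if not page_is_intact(path):
        raise RuntimeError(f"Integrity check failed for {path}")
    with open(path, "rb") as f:
        return f.read()
```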
Abstract:
This article presents field applications and validations of the controlled Monte Carlo data generation scheme. This scheme was previously derived to assist the Mahalanobis squared distance–based damage identification method in coping with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. To do so, real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as the test objects, and these are shown to need enhancement to consolidate their data conditions. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation and a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against those generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme adaptive to any type of input data with any (original) distributional condition.
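For context, the sketch below shows the Mahalanobis squared distance computed from a set of baseline (healthy-state) feature vectors, the statistic that the damage identification method thresholds. When the baseline set is small, the covariance estimate becomes unstable, which is the data-shortage problem the controlled Monte Carlo scheme addresses. The data and variable names here are illustrative only and are not part of the scheme itself.

```python
# Illustrative Mahalanobis squared distance for damage screening.
# Baseline data and the test observation are synthetic placeholders.
import numpy as np

def mahalanobis_sq(x: np.ndarray, baseline: np.ndarray) -> float:
    """Squared Mahalanobis distance of x from the baseline distribution."""
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    diff = x - mu
    return float(diff @ np.linalg.inv(cov) @ diff)

rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 4))   # 200 healthy-state observations, 4 damage-sensitive features
test = rng.normal(size=4) + 0.5        # a new observation to screen against the baseline
print(mahalanobis_sq(test, baseline))  # large values suggest a potentially damaged state
```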
Abstract:
A dynamic accumulator is an algorithm that merges a large set of elements into a constant-size value such that, for each accumulated element, there is a witness confirming that the element was included in the value, with the property that elements can be dynamically added to and deleted from the original set. Recently, Wang et al. presented a dynamic accumulator for batch updates at ICICS 2007. However, their construction suffers from two serious problems. We analyze these problems and propose a way to repair their scheme. We then use the accumulator to construct a new scheme for common secure indices with conjunctive keyword-based retrieval.
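To make the accumulator idea concrete, here is a toy RSA-style accumulator sketch: elements are folded into a single value by exponentiation, and a witness for an element is the accumulation of all other elements. It is purely illustrative (tiny hypothetical modulus, no mapping of elements to primes, no batch updates) and is not Wang et al.'s construction.

```python
# Toy RSA-style accumulator: illustrative only, not secure and not Wang et al.'s
# batch-update scheme. The modulus and base are hypothetical placeholder values.
N = 3233   # toy modulus (61 * 53); a real accumulator needs a large RSA modulus
g = 5      # public base

def accumulate(elements):
    """Fold all elements into one constant-size accumulator value."""
    acc = g
    for e in elements:
        acc = pow(acc, e, N)
    return acc

def witness(elements, x):
    """Witness for x: the accumulation of every element except x."""
    return accumulate([e for e in elements if e != x])

def verify(acc, wit, x):
    """Check that raising the witness to x reproduces the accumulator value."""
    return pow(wit, x, N) == acc

elems = [3, 7, 11]            # in practice, elements are first mapped to primes
acc = accumulate(elems)
assert verify(acc, witness(elems, 7), 7)
```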
Abstract:
Different magnetization was achieved in vertical graphenes fabricated by plasma-enabled chemical conversion of organic precursors with various oxygen atom contents and bonding energies. The graphenes grown from fat-like precursors exhibit magnetization up to 8 emu g−1, whereas the use of sugar-containing precursors results in much lower values. A relatively high Curie temperature exceeding 600 K was also demonstrated.
Abstract:
In 2009, the Capital Markets Development Authority (CMDA), Fiji’s capital market regulator, introduced the Code of Corporate Governance (the Code). The Code is ‘principle-based’ and requires companies listed on the South Pacific Stock Exchange (SPSE) and financial intermediaries to disclose their compliance with the Code’s principles. While compliance with the Code is mandatory, the nature and extent of disclosure is at the discretion of the complying entities. Agency theory and signalling theory suggest that firms with higher expected levels of agency costs will provide greater levels of voluntary disclosure as signals of strong corporate governance. Thus, the study seeks to test these theories by examining the heterogeneity of corporate governance disclosures by firms listed on the SPSE and determining the characteristics of firms that provide similar levels of disclosure. We conducted a content analysis of corporate governance disclosures in the annual reports of firms from 2008 to 2012. The study finds that large, non-family-owned firms with high levels of shareholder dispersion provide a greater quantity and higher quality of corporate governance disclosures. For firms that are relatively smaller, family owned and have low levels of shareholder dispersion, the quantity and quality of corporate governance disclosures are much lower. Some of these firms provide boilerplate disclosures with minimal changes in the following years. These findings support the propositions of agency and signalling theory, which suggest that firms with higher separation between agents and principals will provide more voluntary disclosure to reduce expected agency costs. Semi-structured interviews conducted with key stakeholders further reinforce the findings. The interviews also reveal that complying entities positively perceive the introduction of the Code. Furthermore, while compliance with the Code brought about additional costs, they believed that most of these costs were minimal and one-off, and that the benefits of greater corporate disclosure for user decision making outweighed the costs. The study contributes to the literature by providing insight into the experience of a small capital market in introducing a ‘principle-based’ Code that attempts to encourage corporate governance practices through enhanced disclosure. The study also helps policy makers better understand complying entities’ motivations for compliance and the extent of compliance.
Abstract:
Controlled self-organized growth of vertically aligned carbon nanocone arrays in a radio-frequency inductively coupled plasma-based process is studied. The experiments demonstrate that the gaps between the nanocones, the density of the nanocone array and the shape of the nanocones can be effectively controlled by process parameters such as the gas composition (hydrogen content) and the electrical bias applied to the substrate. Optical measurements demonstrate lower reflectance of the nanocone array compared with a bare Si wafer, evidencing its potential for use in optical devices. The nanocone formation mechanism is explained in terms of the redistribution of surface and volumetric fluxes of plasma-generated species in a developing nanocone array and the passivation of carbon in narrow gaps where the access of plasma ions is hindered. Extensive numerical simulations were used to support the proposed growth mechanism.
Abstract:
Background Person-to-person transmission of respiratory pathogens, including Pseudomonas aeruginosa, is a challenge facing many cystic fibrosis (CF) centres. Viable P aeruginosa are contained in aerosols produced during coughing, raising the possibility of airborne transmission. Methods Using purpose-built equipment, we measured viable P aeruginosa in cough aerosols at 1, 2 and 4 m from the subject (distance) and after allowing aerosols to age for 5, 15 and 45 min in a slowly rotating drum that minimises gravitational settling and inertial impaction (duration). Aerosol particles were captured and sized using an Andersen impactor and cultured using conventional microbiology. Sputum was also cultured, and lung function and respiratory muscle strength were measured. Results Nineteen patients with CF, mean age 25.8 (SD 9.2) years, chronically infected with P aeruginosa, and 10 healthy controls, 26.5 (8.7) years, participated. Viable P aeruginosa were detected in cough aerosols from all patients with CF but not from controls, travelling 4 m in 17/18 (94%) and persisting for 45 min in 14/18 (78%) of the CF group. Marked inter-subject heterogeneity of P aeruginosa aerosol colony counts was seen and correlated strongly (r=0.73–0.90) with sputum bacterial loads. Modelling the decay of viable P aeruginosa in a clinic room suggested that, at the recommended ventilation rate of two air changes per hour, almost 50 min were required for 90% to be removed after an infected patient left the room. Conclusions Viable P aeruginosa in cough aerosols travel further and last longer than previously recognised, providing additional evidence of airborne transmission between patients with CF.
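As a rough check on the room-clearance figure, first-order exponential decay gives the time for a given fraction of aerosol to be removed as t = ln(1/(1 − fraction))/k, where k is the total removal rate. The sketch below uses only the ventilation rate quoted in the abstract; it is illustrative and not the authors' model, which also accounts for additional loss mechanisms (settling, die-off) that shorten the clearance time toward the reported ~50 min.

```python
# Illustrative first-order aerosol clearance calculation (not the authors' model).
import math

def minutes_to_remove(fraction: float, removal_rate_per_hour: float) -> float:
    """Minutes until `fraction` of aerosol is removed under exponential decay."""
    return 60.0 * math.log(1.0 / (1.0 - fraction)) / removal_rate_per_hour

# Ventilation alone at the recommended 2 air changes per hour:
print(minutes_to_remove(0.90, 2.0))  # ~69 min; settling and bacterial die-off reduce this further
```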
Abstract:
The session explores the potential of “Patron Driven Acquisition” (PDA) as a model for the acquisition of online video. Today, PDA has become a standard model of acquisition in the eBook market, more effectively aligning spend with use and increasing return on investment (ROI). PDA is an unexplored model for the acquisition of video, where library collection development is complicated by higher storage and delivery costs, labor overheads for content selection and acquisition, and a dynamic film industry in which media and the technology that supports it change daily. Queensland University of Technology (QUT) and La Trobe University in Australia launched a research project in collaboration with Kanopy to explore the opportunity for PDA of video. The study relied on three data sources: (1) national surveys to compare the video purchasing and use practices of colleges, (2) on-campus pilot projects of PDA models to assess user engagement and behavior, and (3) testing of various user applications and features to support the model. The study incorporates usage statistics and survey data and builds upon a peer-reviewed research paper presented at the VALA 2014 conference in Melbourne, Australia. This session will be conducted by the researchers and will graphically present the results from the study. It will map out a future for video PDA and how libraries can more cost-effectively acquire and maximize the discoverability of online video. The presenters will also solicit input and welcome questions from audience members.
Abstract:
Web servers are accessible by anyone who can access the Internet. Although this universal accessibility is attractive for all kinds of Web-based applications, Web servers are exposed to attackers who may want to alter their contents. Alterations range from humorous additions or changes, which are typically easy to spot, to more sinister tampering, such as providing false or damaging information.
Abstract:
In recent years, fine and ultrafine particles emitted from internal combustion engines have attracted an increasing level of attention. This attention has arisen from epidemiological studies, conducted by a number of research groups, pointing to the health effects resulting from inhalation of fine particles. Previous studies on the influence of fuel sulfur level on diesel vehicle emissions concentrated mainly on particle mass emissions. This study investigates the influence of reducing the diesel fuel sulfur level on the emission and formation of nanoparticles.
Abstract:
In this paper we demonstrate that existing cooperative spectrum sensing schemes formulated for static primary users cannot accurately detect dynamic primary users, regardless of the information fusion method. Errors occur because the sensing parameters calculated by the conventional detector yield sensing performance that violates the sensing requirements. Furthermore, the error is accumulated and compounded by the number of cooperating nodes. To address this limitation, we design and implement a duty cycle detection model for cooperative spectrum sensing that accurately calculates the sensing parameters satisfying the sensing requirements. We show that a longer sensing duration is required to compensate for dynamic primary user traffic.
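For orientation, the sketch below uses the classical energy-detector design relation to compute the number of samples needed to meet target detection and false-alarm probabilities at a given SNR. It is a textbook static-user calculation, not the duty cycle detection model proposed in the paper, and the numeric targets are illustrative; the point it makes is that the required sensing duration grows quickly as the requirements tighten, which is what happens once dynamic primary-user traffic is accounted for.

```python
# Classical energy-detector sample-count relation (static primary user), used here
# only to illustrate how sensing duration scales with the sensing requirements.
from statistics import NormalDist

def q_inv(p: float) -> float:
    """Inverse of the Gaussian Q-function."""
    return NormalDist().inv_cdf(1.0 - p)

def required_samples(p_d: float, p_fa: float, snr: float) -> float:
    """Approximate number of samples to achieve detection probability p_d
    and false-alarm probability p_fa at the given SNR (linear scale)."""
    return ((q_inv(p_fa) - q_inv(p_d) * (1.0 + 2.0 * snr) ** 0.5) / snr) ** 2

# Illustrative targets: Pd = 0.9, Pfa = 0.1 at -13 dB SNR (0.05 linear)
print(required_samples(0.9, 0.1, 0.05))   # roughly 2.8e3 samples
```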
Abstract:
Falling sales in Europe and increasing global competition are forcing automotive manufacturers to develop a customer-based approach to differentiate themselves from the similarly technologically optimised crowd. In spite of this new approach, automotive firms are still firmly entrenched in their reliance upon technology-driven innovation to design, develop and manufacture their products, relegating customer focus to a downstream sales role. However, the time-honoured technology-driven approach to vehicle design and manufacture is coming into question, with the increasing importance of accounting for consumer needs pushing automotive engineers to include the user in their designs. The following paper examines the challenges and opportunities that arise for a single global automotive manufacturer in seeking to adopt a user-centred approach to vehicle design amongst technical employees. As part of an embedded case study, engineers from this manufacturer were interviewed in order to gauge the challenges, barriers and opportunities for the adoption of user-centred design tools within the engineering design process. The analysis of these interviews led to the proposal of a new role within automotive manufacturers, the “designeer”, to bridge the divide between designers and engineers and allow the engineering process to transition from a technology-driven to a user-centred approach.