881 results for flash crowd attack
Abstract:
The mass media and emergency services organisations routinely gather information and disseminate it to the public. During disaster situations both the media and emergency services require acute situational awareness. New social media technologies offer opportunities to enhance situational awareness by crowd-sourcing information using real and virtual social networks. This paper documents how real and virtual social networks were used by a reporter and by members of the public to gather and disseminate emergency information during the flash flood disaster in Toowoomba and the Lockyer Valley in January 2011 and in the days and weeks after the disaster.
Abstract:
Practice-led journalism research techniques were used in this study to produce a ‘first draft of history’ recording the human experience of survivors and rescuers during the January 2011 flash flood disaster in Toowoomba and the Lockyer Valley in Queensland, Australia. The study aimed to discover what can be learnt from engaging in journalistic reporting of natural disasters. This exegesis demonstrates that journalism can be both a creative practice and a research methodology. About 120 survivors, rescuers and family members of victims participated in extended interviews about what happened to them and how they survived. Their stories are the basis for two creative outputs of the study: a radio documentary and a non-fiction book that document how and why people died, or survived, or were rescued. Listeners and readers are taken "into the flood", where they feel anxious for those in peril, relieved when people are saved, and devastated when babies, children and adults are swept away to their deaths. In undertaking reporting about the human experience of the floods, several significant elements of journalistic reportage of disasters were exposed. The first related to the vital role that online social media played during the disaster for individuals, citizen reporters, journalists and emergency services organisations. Online social media offer reporters powerful new tools for both gathering and disseminating news. The second related to the performance of journalists in covering events involving traumatic experiences. Journalists are often required to cover trauma and are often amongst the first responders to disasters. This study found that almost all of the disaster survivors who were approached were willing to talk in detail about their traumatic experiences. A finding of this project is that journalists who interview trauma survivors can develop techniques for improving their ability to interview people who have experienced traumatic events.
These include being flexible with interview timing and location; empowering interviewees to understand they don’t have to answer every question they are asked; providing emotional security for interviewees; and being committed to accuracy. Survivors may exhibit posttraumatic stress symptoms, but some exhibit and report posttraumatic growth. The willingness of a high proportion of the flood survivors to participate in the flood research made it possible to document a relatively unstudied question within the literature about journalism and trauma: when and why disaster survivors will want to speak to reporters. The study sheds light on the reasons why a group of traumatised people chose to speak about their experiences. Their reasons fell into six categories: lessons need to be learned from the disaster; a desire for the public to know what had happened; a sense of duty to make sure warning systems and disaster responses are improved in future; personal recovery; the financial disinterest of reporters in listening to survivors; and the timing of the request for an interview. Feedback on the creative-practice component of this thesis, the book and radio documentary, shows that these issues are not purely matters of ethics. By following appropriate protocols, it is possible to produce stories that engender strong audience responses, such as that the program was "amazing and deeply emotional" and "community storytelling at its most important". Participants reported that the experience of the interview process was "healing" and that the creative outcome resulted in "a very precious record of an afternoon of tragedy and triumph and the bitter-sweetness of survival".
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
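A minimal sketch of the feature-regression idea behind this kind of counting, using k-nearest-neighbour regression (one of the models the paper evaluates); the features, normalised values and counts below are invented for illustration, not taken from the paper's datasets:

```python
# Toy sketch of feature-based crowd counting with k-nearest-neighbour
# regression. Each frame is represented by hand-picked features --
# e.g. foreground-blob area and edge-pixel count -- normalised by
# camera calibration so the same model can transfer across viewpoints.
# All numbers here are invented.

def knn_predict(train, query, k=3):
    """Predict a crowd count as the mean count of the k nearest
    training frames in (normalised) feature space."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(feats, query)), count)
        for feats, count in train
    )
    return sum(count for _, count in dists[:k]) / k

# (blob_area, edge_pixels), already normalised per viewpoint -> true count
train = [
    ((0.10, 0.12), 2), ((0.22, 0.25), 5), ((0.35, 0.30), 8),
    ((0.48, 0.45), 11), ((0.60, 0.58), 15), ((0.75, 0.70), 19),
]

print(round(knn_predict(train, (0.50, 0.47))))  # -> 11
```

Gaussian process regression, which the paper finds optimal, would replace the averaging step with a kernel-weighted posterior mean, at the cost of fitting hyperparameters.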
Abstract:
The two-hour game jam was performed as part of the State Library of Queensland 'Garage Gamer' series of events, summer 2013, at the SLQ exhibition. An aspect of the exhibition was the series of 'Level Up' game nights. We hosted the first of these - under the auspices of brIGDA, Game On. It was a party - but the focal point of the event was a live-streamed two-hour game jam. Game jams have become popular amongst the game development and design community in recent years, particularly with the growth of the Global Game Jam, a yearly event which brings thousands of game makers together across different sites in different countries. Other established jams take place online, for example the Ludum Dare challenge, which has been running since 2002. Other challenges follow the same model in more intimate circumstances, and it is now common to find institutions and groups holding their own small local game-making jams. There are variations around the format - some jams are more competitive than others, for example - but a common aspect is the creation of an intense creative crucible centred around teamwork and ‘accelerated game development’. Works (games) produced during these intense events often display more experimental qualities than those undertaken as commercial projects. In part this is because the typical jam is started with a conceptual design brief, perhaps a single word, or in the case of the specific game jam described in this paper, three words. Teams have to envision the challenge keyword/s as a game design using whatever skills and technologies they can and produce a finished working game in the time given. Game jams thus provide design researchers with extraordinary fodder, and recent years have also seen a number of projects which seek to illuminate the design process as seen in these events. For example, Gaydos, Harris and Martinez discuss the opportunity of the jam to expose students to principles of design process and design spaces (2011).
Rouse muses on the game jam ‘as radical practice’ and a ‘corrective to game creation as it is normally practiced’. His observations about his own experience in a jam emphasise the same artistic endeavour foregrounded earlier, where the experience is about creation that is divorced from the instrumental motivations of commercial game design (Rouse 2011) and where the focus is on process over product. Other participants remark on the social milieu of the event as a critical factor and on the collaborative opportunity as a rich site for engaging participants in design processes (Shin et al. 2012). Shin et al. are particularly interested in the notion of the site of the process and the ramifications of participants being in the same location. They applaud the more localised event where there is an emphasis on local participation and collaboration. For other commentators, it is specifically the social experience in the place of the jam that is the most important aspect (see Keogh 2011): not the material site but rather the physical, embodied experience of ‘being there’ and being part of the event. Participants talk about game jams they have attended in a manner similar to the observations made by Dourish, where the experience is layered on top of the physical space of the event (Dourish 2006). It is as if the event has taken on qualities of place, where we find echoes of Tuan’s description of a particular site having an aura of history that makes it a very different place, redolent and evocative (Tuan 1977). The two-hour game jam held during the SLQ Garage Gamer program was all about social experience.
Abstract:
Novel computer vision techniques have been developed for automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
The Modicon Communication Bus (Modbus) protocol is one of the most commonly used protocols in industrial control systems. Modbus was not designed to provide security. This paper confirms that the Modbus protocol is vulnerable to flooding attacks. These attacks involve the injection of commands that disrupt the normal operation of the control system. This paper describes a set of experiments showing that an anomaly-based change detection algorithm and a signature-based Snort threshold module are capable of detecting Modbus flooding attacks. In comparing these intrusion detection techniques, we find that signature-based detection requires a carefully selected threshold value, and that the anomaly-based change detection algorithm may have a short delay before detecting the attacks, depending on the parameters used. In addition, we generate a network traffic dataset of flooding attacks on the Modbus control system protocol.
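The two detection styles compared here can be contrasted on a toy stream of per-second Modbus request counts; the traffic figures, threshold and change-detection parameters below are invented, not taken from the paper's experiments:

```python
# Toy contrast of the two detection styles on per-second Modbus request
# counts: a fixed Snort-style threshold versus a simple cumulative-sum
# (CUSUM) change detector. All numbers are invented for illustration.

def threshold_detect(rates, limit):
    """Signature-style detection: flag any second whose request count
    exceeds a hand-tuned threshold."""
    return [i for i, r in enumerate(rates) if r > limit]

def cusum_detect(rates, target, slack, alarm):
    """Anomaly-style detection: accumulate deviations above the
    expected rate; alarm once the cumulative sum crosses `alarm`."""
    s, alarms = 0.0, []
    for i, r in enumerate(rates):
        s = max(0.0, s + (r - target - slack))
        if s > alarm:
            alarms.append(i)
            s = 0.0
    return alarms

# Normal polling ~10 req/s; a flood is injected from t=5 onward.
rates = [10, 11, 9, 10, 12, 60, 65, 70, 62, 64]
print(threshold_detect(rates, limit=40))          # fires on every flooded second
print(cusum_detect(rates, target=10, slack=5, alarm=80))  # fires after a short delay
```

The toy run mirrors the paper's observation: the threshold fires immediately but depends on a well-chosen limit, while the change detector needs no per-attack threshold but alarms only after deviations accumulate.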
Abstract:
Persistent monitoring of the ocean is not optimally accomplished by repeatedly executing a fixed path in a fixed location. The ocean is dynamic, and so should be the paths executed to monitor and observe it. An open question merging autonomy and optimal sampling is how and when to alter a path or decision while still achieving desired science objectives. Additionally, many marine robotic deployments can last multiple weeks to months, making it very difficult for individuals to continuously monitor and retask them as needed. This problem becomes increasingly complex when multiple platforms are operating simultaneously. There is a need for monitoring and adaptation of the robotic fleet via teams of scientists working in shifts; crowds are ideal for this task. In this paper, we present a novel application of crowd-sourcing to extend the autonomy of persistent-monitoring vehicles to enable nonrepetitious sampling over long periods of time. We present a framework that enables the control of a marine robot by anybody with an internet-enabled device. Voters are provided with the current vehicle location, gathered science data and predicted ocean features through the associated decision support system. Results are included from a simulated implementation of our system on a Wave Glider operating in Monterey Bay, with the science objective of maximizing the sum of observed nitrate values collected.
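The crowd-in-the-loop retasking step can be sketched as a simple vote tally; the waypoint names, votes, predicted nitrate values and tie-break rule below are invented placeholders, not the paper's decision support system:

```python
# Toy sketch of crowd-sourced retasking: each voter picks a candidate
# waypoint; the platform moves to the majority choice, breaking ties
# in favour of the waypoint with the highest predicted nitrate value
# (the science objective). All names and numbers are invented.

from collections import Counter

def next_waypoint(votes, predicted_nitrate):
    counts = Counter(votes)
    best = max(counts.values())
    tied = [w for w, c in counts.items() if c == best]
    return max(tied, key=lambda w: predicted_nitrate[w])

votes = ["north", "north", "east", "hold", "east"]
nitrate = {"north": 12.5, "east": 14.1, "hold": 9.0}
print(next_waypoint(votes, nitrate))  # "east" wins the 2-2 tie on nitrate
```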
Abstract:
Flash flood disasters happen suddenly. The Toowoomba Lockyer Valley flash flood in January 2011 was not forecast by the Bureau of Meteorology until after it had occurred. Domestic and wild animals gave the first warning of the disaster in the days leading up to the event, and large animals gave warnings on the morning of the disaster. Twenty-three people in the disaster zone died, including five children. More than 500 people were listed as missing. Some of those who died perished because they stayed in the disaster zone to look after their animals while other members of their family escaped to safety. Some people who were in danger refused to be rescued because they could not take their pets with them. During a year spent recording accounts of the survivors of the disaster, animals were often mentioned by survivors. Despite the obvious perils, people risked their lives to save their animals; people saw animals try to save each other; animals rescued people; people rescued animals; animals survived where people died; animals were used to find human victims in the weeks after the disaster; and animals died. The stories of the flood present challenges for pet owners, farmers, counter-disaster planners, weather forecasters and emergency responders in preparing for disasters, responding to them and recovering after them.
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered primitives for multimedia types such as images and videos, since these are organized forms of numeric information. Thus, the capability to watermark numerical data directly implies the capability to watermark multimedia objects and discourage information theft on social networking sites and the Internet in general. Unfortunately, there has been very limited research in the field of numeric set watermarking, due to underlying limitations in terms of the number of items in the set and the LSBs in each item available for watermarking. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items’ most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in the least significant bits, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with a success rate close to 100%. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack and propose potential safeguards that can provide resilience against this attack.
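The bucketing step can be sketched in a few lines; the item values, bit widths and LSB-randomisation below are a simplified illustration of the attack idea, not the scheme of Gupta et al. or their experiments:

```python
# Toy sketch of the bucket attack: group items by their most
# significant bits (MSBs). When the embedding decision depends only on
# a hash of the MSBs, every item in a bucket is either marked or
# unmarked together, so randomising the least significant bits (LSBs)
# of each item, bucket by bucket, destroys any embedded watermark bits
# while barely perturbing the data. Values here are invented.

import random

def msb_key(item, lsb_bits=4):
    """Bucket key: the item with its low-order bits masked off."""
    return item >> lsb_bits

def bucket_attack(items, lsb_bits=4, seed=0):
    """Overwrite the low bits of every item so that any
    MSB-hash-selected watermark bits are lost."""
    rng = random.Random(seed)
    buckets = {}
    for item in items:
        buckets.setdefault(msb_key(item, lsb_bits), []).append(item)
    attacked = []
    for key, members in buckets.items():
        for _ in members:
            attacked.append((key << lsb_bits) | rng.randrange(1 << lsb_bits))
    return attacked

items = [0x1A3, 0x1A7, 0x2B1, 0x2BF, 0x3C2]
out = bucket_attack(items)
# The MSBs (the data's coarse values) survive; the LSBs are scrambled.
print(sorted(hex(x) for x in out))
```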
Abstract:
This paper examines the use of crowdfunding platforms to fund academic research. Looking specifically at the use of a Pozible campaign to raise funds for a small pilot research study into home education in Australia, the paper reports on the successes and problems of using the platform. It also examines the crowdsourcing of literature searching as part of the package. The paper looks at the realities of using this type of platform to gain start-up funding for a project and argues that family and friends are likely to be the biggest supporters. This finding echoes similar work in the arts communities that are traditionally served by crowdfunding platforms. The paper argues that, with exceptions, these platforms can be a source of income at a time when academics are finding it increasingly difficult to source government funding for projects.
Abstract:
Using Media Access Control (MAC) addresses for data collection and tracking is a capable and cost-effective approach, as traditional methods such as surveys and video surveillance have numerous drawbacks and limitations. Positioning cell phones via the Global System for Mobile communication has been considered an attack on people's privacy. A MAC address, by contrast, is simply the unique identifier a WiFi- or Bluetooth-enabled device uses to connect to another device, and it carries no comparable potential for privacy infringement. This paper presents the use of a MAC address data collection approach for the analysis of the spatio-temporal dynamics of humans in terms of shared space utilisation. The paper first discusses the critical challenges and key benefits of MAC address data as a tracking technology for monitoring human movement. Here, proximity-based MAC address tracking is postulated as an effective methodology for analysing the complex spatio-temporal dynamics of human movements in shared zones such as lounge and office areas. A case study of a university staff lounge area is described in detail, and the results indicate a significant added value of the methodology for human movement tracking. By analysis of MAC address data in the study area, clear statistics such as staff utilisation frequency, utilisation peak periods, and staff time spent are obtained. The analyses also reveal staff socialising profiles in terms of group and solo gathering. The paper concludes with a discussion of why MAC address tracking offers significant advantages for tracking human behaviour in terms of shared space utilisation with respect to other, more prominent technologies, and outlines some of its remaining deficiencies.
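Turning raw proximity sightings into utilisation statistics of the kind described above can be sketched as follows; the MAC addresses, timestamps and session-gap parameter are invented, and the paper's actual processing pipeline is not specified here:

```python
# Toy sketch of deriving visit counts and time spent from (MAC address,
# timestamp) sightings logged by a proximity scanner. Sightings more
# than `gap` seconds apart are treated as separate visits. All
# addresses and timestamps are invented.

from collections import defaultdict

def visit_stats(sightings, gap=300):
    """Return {mac: (visit_count, total_seconds)} from (mac, ts) pairs."""
    by_mac = defaultdict(list)
    for mac, ts in sightings:
        by_mac[mac].append(ts)
    stats = {}
    for mac, times in by_mac.items():
        times.sort()
        visits, start, prev, total = 1, times[0], times[0], 0
        for ts in times[1:]:
            if ts - prev > gap:          # a new visit begins
                total += prev - start
                visits += 1
                start = ts
            prev = ts
        total += prev - start
        stats[mac] = (visits, total)
    return stats

log = [("aa:01", 0), ("aa:01", 60), ("aa:01", 120),
       ("aa:01", 1000), ("aa:01", 1100),
       ("bb:02", 50)]
print(visit_stats(log))  # aa:01 made two visits totalling 220 seconds
```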
Abstract:
At Crypto 2008, Shamir introduced a new algebraic attack called the cube attack, which allows us to solve black-box polynomials if we are able to tweak the inputs by varying an initialization vector. In a stream cipher setting where the filter function is known, we can extend it to the cube attack with annihilators: by applying the cube attack to Boolean functions for which we can find low-degree multiples (equivalently, annihilators), the attack complexity can be improved. When the size of the filter function is smaller than that of the LFSR, we can improve the attack complexity further by considering a sliding-window version of the cube attack with annihilators. Finally, we extend the cube attack to vectorial Boolean functions by finding implicit relations with low-degree polynomials.
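The core cube-attack step, summing a black-box polynomial over a cube of IV bits to isolate a low-degree superpoly, can be shown on a toy polynomial (invented for illustration, not a real cipher):

```python
# Toy cube attack: treat f as a black box over public IV bits v and
# secret key bits k. Summing f over all assignments of the cube
# variables (v0, v1), modulo 2, cancels every term that does not
# contain the full monomial v0*v1, leaving the superpoly -- here the
# single secret bit k0 -- which the attacker can read off.

from itertools import product

def f(v, k):
    """Invented black-box polynomial over GF(2): v0*v1*k0 + v0*k1 + k0*k1."""
    return (v[0] & v[1] & k[0]) ^ (v[0] & k[1]) ^ (k[0] & k[1])

def cube_sum(k):
    """Sum f over the cube of all (v0, v1) assignments, mod 2."""
    s = 0
    for v in product((0, 1), repeat=2):
        s ^= f(v, k)
    return s

# For every key, the cube sum equals the secret bit k0.
for k in product((0, 1), repeat=2):
    print(k, "-> cube sum", cube_sum(k))
```

The annihilator variant in the abstract applies the same summation to a low-degree multiple of the filter function instead of the function itself, lowering the degree the cube must cancel.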
Abstract:
This paper first presents the benefits and critical challenges of using Bluetooth and Wi-Fi for crowd data collection and monitoring. The major challenges include antenna characteristics, the environment's complexity and scanning features. Wi-Fi and Bluetooth are compared in terms of architecture, discovery time, popularity of use and signal strength. The type of antenna used and the environment's complexity, such as trees in outdoor spaces and partitions in indoor spaces, highly affect the scanning range. The aforementioned challenges are empirically evaluated by "real" experiments using Bluetooth and Wi-Fi scanners. The issues related to antenna characteristics are also highlighted by experimenting with different antenna types. Novel scanning approaches, including Overlapped Zones and Single Point Multi-Range detection methods, are then presented and verified by real-world tests. These techniques are applied for location identification of the captured MAC IDs, which can extract more information about people-movement dynamics.
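The Overlapped Zones idea can be sketched as mapping the set of scanners that detect a MAC ID to a coarse zone; the scanner names and zone labels are invented placeholders, not the paper's deployment:

```python
# Toy sketch of Overlapped Zones localisation: two scanners with
# overlapping coverage split the space into three zones, and the set
# of scanners that detect a MAC ID determines which zone the device
# is in. Scanner and zone names are invented.

def locate(detections):
    """Map the set of scanners that saw a device to a coarse zone."""
    zones = {
        frozenset({"A"}): "A-only zone",
        frozenset({"B"}): "B-only zone",
        frozenset({"A", "B"}): "overlap zone",
    }
    return zones.get(frozenset(detections), "out of range")

print(locate({"A", "B"}))  # device sits in the region covered by both
```

The Single Point Multi-Range method would refine this by scanning at several power/range settings from one point, giving concentric rings instead of pairwise overlaps.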
Abstract:
We present a text watermarking scheme that embeds a bitstream watermark W in a text document P while preserving the meaning, context, and flow of the document. The document is viewed as a set of paragraphs, each paragraph being a set of sentences. The sequence of paragraphs and sentences used to embed watermark bits is permuted using a secret key. Then, English language sentence transformations are used to modify sentence lengths, thus embedding watermark bits in the least significant bits (LSBs) of the sentences’ cardinalities. The embedding and extracting algorithms are public, while the secrecy and security of the watermark depend on a secret key K. The probability of false positives is extremely small, hence avoiding incidental occurrences of our watermark in random text documents. Majority voting provides security against text addition, deletion, and swapping attacks, further reducing the probability of false positives. The scheme is secure against the general attacks on text watermarks such as reproduction (photocopying, FAX), reformatting, synonym substitution, text addition, text deletion, text swapping, paragraph shuffling and collusion attacks.
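The sentence-length LSB embedding can be sketched crudely as follows; here the meaning-preserving sentence transformation is simulated by appending a filler word, and the key-based permutation and majority voting are omitted, so this is an illustration of the parity idea only:

```python
# Toy sketch of hiding watermark bits in the least significant bit
# (parity) of sentence lengths. A real scheme would use
# meaning-preserving English transformations and a key-based
# permutation of sentences; here we simply append a filler word when
# the parity must flip. Sentences and bits are invented.

def embed(sentences, bits, filler="indeed"):
    out = []
    for sent, bit in zip(sentences, bits):
        words = sent.split()
        if len(words) % 2 != bit:        # parity (LSB) must match the bit
            words.append(filler)
        out.append(" ".join(words))
    return out

def extract(sentences):
    return [len(s.split()) % 2 for s in sentences]

text = ["the quick brown fox", "jumps over the lazy dog", "hello world"]
marked = embed(text, [1, 0, 0])
print(marked)
print(extract(marked))  # recovers [1, 0, 0]
```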
Abstract:
A well-known attack on RSA with low secret exponent d was given by Wiener about 15 years ago. Wiener showed that using continued fractions, one can efficiently recover the secret exponent d from the public key (N, e) as long as d < N^{1/4}. Interestingly, Wiener stated that his attack may sometimes also work when d is slightly larger than N^{1/4}. This raises the question of how much larger d can be: could the attack work with non-negligible probability for d = N^{1/4 + ρ} for some constant ρ > 0? We answer this question in the negative by proving a converse to Wiener’s result. Our result shows that, for any fixed ε > 0 and all sufficiently large modulus lengths, Wiener’s attack succeeds with negligible probability over a random choice of d < N^δ (in an interval of size Ω(N^δ)) as soon as δ > 1/4 + ε. Thus Wiener’s success bound d
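Wiener's continued-fraction attack itself is short enough to sketch in full; the tiny key below is a textbook-sized example, far below real RSA moduli:

```python
# Minimal sketch of Wiener's continued-fraction attack on RSA with a
# small secret exponent. Each convergent k/d of e/N approximates
# e/phi(N); a correct guess yields phi, hence p + q = N - phi + 1,
# which we verify by checking that the discriminant of
# x^2 - (p+q)x + N is a perfect square. The key (N = 90581,
# e = 17993, d = 5) is a toy example.

from math import isqrt

def convergents(num, den):
    """Yield continued-fraction convergents (k, d) of num/den."""
    p0, p1 = 0, 1
    q0, q1 = 1, 0
    while den:
        a, num, den = num // den, den, num % den
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        yield p1, q1

def wiener(e, N):
    """Recover d from (N, e) when d is roughly below N**0.25."""
    for k, d in convergents(e, N):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                  # candidate p + q
        disc = s * s - 4 * N
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

print(wiener(17993, 90581))  # recovers the secret exponent d = 5
```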