92 results for Hot spots, levels of delinquency, citizen’s security, critical areas, Bogotá.
in Queensland University of Technology - ePrints Archive
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.
Abstract:
Air pollution levels were monitored continuously over a period of 4 weeks at four sampling sites along a busy urban corridor in Brisbane. The selected sites were representative of industrial and residential types of urban environment affected by vehicular traffic emissions. The concentration levels of submicrometer particle number, PM2.5, PM10, CO, and NOx were measured 5-10 meters from the road. Meteorological parameters and traffic flow rates were also monitored. The data were analysed in terms of the relationship between monitored pollutants and existing ambient air quality standards. The results indicate that the concentration levels of all pollutants exceeded the ambient air background levels, in certain cases by up to an order of magnitude. While the 24-hr average concentration levels did not exceed the standard, estimates for the annual averages were close to, or even higher than the annual standard levels.
Abstract:
Maize streak virus strain A (MSV-A), the causal agent of maize streak disease, is today one of the most serious biotic threats to African food security. Determining where MSV-A originated and how it spread transcontinentally could yield valuable insights into its historical emergence as a crop pathogen. Similarly, determining where the major extant MSV-A lineages arose could identify geographical hot spots of MSV evolution. Here, we use model-based phylogeographic analyses of 353 fully sequenced MSV-A isolates to reconstruct a plausible history of MSV-A movements over the past 150 years. We show that since the probable emergence of MSV-A in southern Africa around 1863, the virus spread transcontinentally at an average rate of 32.5 km/year (95% highest probability density interval, 15.6 to 51.6 km/year). Using distinctive patterns of nucleotide variation caused by 20 unique intra-MSV-A recombination events, we tentatively classified the MSV-A isolates into 24 easily discernible lineages. Despite many of these lineages displaying distinct geographical distributions, it is apparent that almost all have emerged within the past 4 decades from either southern or east-central Africa. Collectively, our results suggest that regular analysis of MSV-A genomes within these diversification hot spots could be used to monitor the emergence of future MSV-A lineages that could affect maize cultivation in Africa. © 2011, American Society for Microbiology.
Abstract:
Identification of hot spots, also known as the sites with promise, black spots, accident-prone locations, or priority investigation locations, is an important and routine activity for improving the overall safety of roadway networks. Extensive literature focuses on methods for hot spot identification (HSID). A subset of this considerable literature is dedicated to conducting performance assessments of various HSID methods. A central issue in comparing HSID methods is the development and selection of quantitative and qualitative performance measures or criteria. The authors contend that currently employed HSID assessment criteria—namely false positives and false negatives—are necessary but not sufficient, and additional criteria are needed to exploit the ordinal nature of site ranking data. With the intent to equip road safety professionals and researchers with more useful tools to compare the performances of various HSID methods and to improve the level of HSID assessments, this paper proposes four quantitative HSID evaluation tests that are, to the authors’ knowledge, new and unique. These tests evaluate different aspects of HSID method performance, including reliability of results, ranking consistency, and false identification consistency and reliability. It is intended that road safety professionals apply these different evaluation tests in addition to existing tests to compare the performances of various HSID methods, and then select the most appropriate HSID method to screen road networks to identify sites that require further analysis. This work demonstrates four new criteria using 3 years of Arizona road section accident data and four commonly applied HSID methods [accident frequency ranking, accident rate ranking, accident reduction potential, and empirical Bayes (EB)]. The EB HSID method reveals itself as the superior method in most of the evaluation tests. 
In contrast, identifying hot spots using accident rate rankings performs the least well among the tests. The accident frequency and accident reduction potential methods perform similarly, with slight differences explained. The authors believe that the four new evaluation tests offer insight into HSID performance heretofore unavailable to analysts and researchers.
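One of the aspects the abstract names, ranking consistency across observation periods, can be illustrated with a minimal sketch. This is not the paper's own test: the function, the site names, and the cutoff k below are hypothetical, and the measure shown (fraction of top-k sites a method identifies again in a later period) is just one simple way to quantify that idea.

```python
def method_consistency(ranks_p1, ranks_p2, k):
    """Fraction of the top-k hot spots identified in period 1 that the
    same HSID method identifies again among the top k in period 2.
    A truly hazardous site should tend to reappear, so values near 1.0
    suggest a more consistent screening method."""
    top1, top2 = set(ranks_p1[:k]), set(ranks_p2[:k])
    return len(top1 & top2) / k

# Hypothetical site rankings from two consecutive observation periods
period1 = ["s3", "s7", "s1", "s9", "s2"]
period2 = ["s7", "s3", "s2", "s1", "s8"]
score = method_consistency(period1, period2, k=3)  # 2 of 3 sites recur
```

Comparing such scores across methods (e.g. frequency ranking vs. empirical Bayes) over the same pair of periods gives an ordinal comparison of the kind the paper advocates.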
Abstract:
Large trucks are involved in a disproportionately small fraction of the total crashes but a disproportionately large fraction of fatal crashes. Large truck crashes often result in significant congestion due to the trucks' large physical dimensions and to difficulties in clearing crash scenes. Consequently, preventing large truck crashes is critical to improving highway safety and operations. This study identifies high-risk sites (hot spots) for large truck crashes in Arizona and examines potential risk factors related to the design and operation of the high-risk sites. High-risk sites were identified using both state-of-the-practice methods (accident reduction potential using negative binomial regression with long crash histories) and a newly proposed method using Property Damage Only Equivalents (PDOE). The hot spots identified via the count model generally exhibited few fatalities and major injuries but many minor injuries and property-damage-only (PDO) crashes, while the opposite trend was observed using the PDOE methodology. The hot spots based on the count model exhibited large AADTs, whereas those based on the PDOE showed relatively small AADTs but large fractions of trucks and high posted speed limits. Documented site investigations of hot spots revealed numerous potential risk factors, including weaving activities near freeway junctions and ramps, absence of acceleration lanes near on-ramps, shoulders too narrow to accommodate large trucks, narrow lane widths, inadequate signage, and poor lighting conditions within a tunnel.
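The general idea behind a PDOE score, weighting each crash by its severity relative to a property-damage-only crash, can be sketched as follows. The severity weights here are hypothetical placeholders for illustration only; the study's actual equivalence values are not given in the abstract.

```python
# Hypothetical severity weights, expressed relative to one PDO crash.
WEIGHTS = {"fatal": 100.0, "major_injury": 10.0, "minor_injury": 2.0, "pdo": 1.0}

def pdoe_score(crash_counts):
    """Property Damage Only Equivalents for a site: the severity-weighted
    crash total. Unlike a raw count, this lets a site with one fatal crash
    outrank a site with many minor ones."""
    return sum(WEIGHTS[severity] * n for severity, n in crash_counts.items())

site = {"fatal": 1, "major_injury": 2, "minor_injury": 5, "pdo": 12}
score = pdoe_score(site)  # 100 + 20 + 10 + 12 = 142.0
```

Ranking sites by this score rather than by crash frequency is what shifts the identified hot spots toward the low-AADT, high-speed sites described above.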
Abstract:
Defence organisations perform information security evaluations to confirm that electronic communications devices are safe to use in security-critical situations. Such evaluations include tracing all possible dataflow paths through the device, but this process is tedious and error-prone, so automated reachability analysis tools are needed to make security evaluations faster and more accurate. Previous research has produced a tool, SIFA, for dataflow analysis of basic digital circuitry, but it cannot analyse dataflow through microprocessors embedded within the circuit since this depends on the software they run. We have developed a static analysis tool that produces SIFA-compatible dataflow graphs from embedded microcontroller programs written in C. In this paper we present a case study which shows how this new capability supports combined hardware and software dataflow analyses of a security-critical communications device.
Abstract:
Existing secure software development principles tend to focus on coding vulnerabilities, such as buffer or integer overflows, that apply to individual program statements, or issues associated with the run-time environment, such as component isolation. Here we instead consider software security from the perspective of potential information flow through a program’s object-oriented module structure. In particular, we define a set of quantifiable "security metrics" which allow programmers to quickly and easily assess the overall security of a given source code program or object-oriented design. Although measuring quality attributes of object-oriented programs for properties such as maintainability and performance has been well-covered in the literature, metrics which measure the quality of information security have received little attention. Moreover, existing security-relevant metrics assess a system either at a very high level, i.e., the whole system, or at a fine level of granularity, i.e., with respect to individual statements. These approaches make it hard and expensive to recognise a secure system from an early stage of development. Instead, our security metrics are based on well-established compositional properties of object-oriented programs (i.e., data encapsulation, cohesion, coupling, composition, extensibility, inheritance and design size), combined with data flow analysis principles that trace potential information flow between high- and low-security system variables. We first define a set of metrics to assess the security quality of a given object-oriented system based on its design artifacts, allowing defects to be detected at an early stage of development. We then extend these metrics to produce a second set applicable to object-oriented program source code.
The resulting metrics make it easy to compare the relative security of functionally equivalent system designs or source code programs so that, for instance, the security of two different revisions of the same system can be compared directly. This capability is further used to study the impact of specific refactoring rules on system security more generally, at both the design and code levels. By measuring the relative security of various programs refactored using different rules, we thus provide guidelines for the safe application of refactoring steps to security-critical programs. Finally, to make it easy and efficient to measure a system design or program’s security, we have also developed a stand-alone software tool which automatically analyses and measures the security of UML designs and Java program code. The tool’s capabilities are demonstrated by applying it to a number of security-critical system designs and Java programs. Notably, the validity of the metrics is demonstrated empirically through measurements that confirm our expectation that program security typically improves as bugs are fixed, but worsens as new functionality is added.
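An encapsulation-style metric in the spirit of those the abstract describes can be sketched in a few lines. This is an illustrative toy, not one of the paper's actual metrics: the function name, the attribute tuples, and the example design data below are all hypothetical.

```python
def classified_attribute_exposure(attributes):
    """Fraction of security-classified attributes that are NOT private.
    Each attribute is a (name, is_classified, is_private) tuple.
    Lower is better: a non-private classified attribute offers more
    potential information-flow paths from high- to low-security code."""
    classified = [a for a in attributes if a[1]]
    if not classified:
        return 0.0
    exposed = [a for a in classified if not a[2]]
    return len(exposed) / len(classified)

# Hypothetical design: "pin" and "key" hold secrets, "logo" does not.
design = [("pin", True, True), ("key", True, False), ("logo", False, False)]
exposure = classified_attribute_exposure(design)  # 1 of 2 secrets exposed
```

Evaluating such a ratio on two revisions of the same design gives the kind of direct before/after security comparison the abstract describes, e.g. before and after applying a refactoring rule.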
Abstract:
This study proposes a framework of a model-based hot spot identification method by applying full Bayes (FB) technique. In comparison with the state-of-the-art approach [i.e., empirical Bayes method (EB)], the advantage of the FB method is the capability to seamlessly integrate prior information and all available data into posterior distributions on which various ranking criteria could be based. With intersection crash data collected in Singapore, an empirical analysis was conducted to evaluate the following six approaches for hot spot identification: (a) naive ranking using raw crash data, (b) standard EB ranking, (c) FB ranking using a Poisson-gamma model, (d) FB ranking using a Poisson-lognormal model, (e) FB ranking using a hierarchical Poisson model, and (f) FB ranking using a hierarchical Poisson (AR-1) model. The results show that (a) when using the expected crash rate-related decision parameters, all model-based approaches perform significantly better in safety ranking than does the naive ranking method, and (b) the FB approach using hierarchical models significantly outperforms the standard EB approach in correctly identifying hazardous sites.
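The empirical Bayes weighting that serves as the state-of-the-art baseline here can be sketched with the standard EB safety estimate, a weighted average of the model-predicted and observed crash counts. The overdispersion value and site data below are hypothetical, and this is the textbook formula rather than code from the study.

```python
def eb_estimate(predicted, observed, phi):
    """Empirical Bayes expected crash count for one site.

    predicted: mean crash count from a safety performance function
    observed:  recorded crash count at the site
    phi:       overdispersion parameter of the negative binomial model
    """
    w = 1.0 / (1.0 + predicted / phi)  # shrinkage weight toward the model
    return w * predicted + (1.0 - w) * observed

# Hypothetical sites: (model prediction, observed crashes)
sites = {"A": (2.0, 9), "B": (5.0, 6), "C": (1.0, 0)}
ranking = sorted(sites, key=lambda s: eb_estimate(*sites[s], phi=2.0),
                 reverse=True)  # rank sites by EB-adjusted crash count
```

The shrinkage toward the model prediction is what protects EB ranking from the regression-to-the-mean effect that makes naive ranking on raw counts unreliable; the full Bayes approaches in the study replace this point estimate with a posterior distribution.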
Abstract:
Severe dioxin contamination at the Bien Hoa and Da Nang airbases in Vietnam is of international concern. Public health risk reduction programs were implemented in Bien Hoa in 2007-2009 and in Da Nang in 2009-2011. In 2009 and 2011 we reported the encouraging results of these interventions in improving the knowledge, attitudes and practices (KAP) of local residents in reducing the risk of dioxin exposure through foods. In 2013 we revisited these dioxin hot spots, aiming to evaluate whether the results of the intervention had been maintained and to identify factors affecting the sustainability of the programs. To assess this, 16 in-depth interviews, six focus group discussions, and pre- and post-intervention KAP surveys were undertaken. A total of 800 respondents from six intervention wards and 200 respondents from Buu Long Ward (the control site) were randomly selected to participate in the surveys. The results showed that as of 2013, the programs were rated as "moderately sustained" with a score of 3.3 out of 5.0 (cut-off points 2.5 to <3.5) for Bien Hoa, and "well sustained" with a score of 3.8 out of 5.0 (cut-off points 3.5 to <4.5) for Da Nang. Most formal intervention program activities had ceased and dioxin risk communication activities were no longer integrated into local routine health education programs. However, the main outcomes were maintained and were better than those in the control ward. Migration, the lack of official guidance from City People's Committees and local authorities, and the politically sensitive nature of dioxin issues were the main challenges for the sustainability of the programs.
Abstract:
This study assessed environmental health risk from dioxin in foods and sustainability of risk reduction programs at two heavily contaminated former military sites in Vietnam. The study involved 1000 household surveys, analysis of food samples and in-depth discussions with residents and officials. The findings indicate that more than 40 years after the war, local residents still experience high exposure to dioxin if they consume local high risk foods. Public health intervention programs were rated moderately to well sustained. Internal migration, and lack of clear, official guidance and sensitivity regarding dioxin issues were the main challenges for sustainability of prevention programs.
Abstract:
Background Bien Hoa and Da Nang airbases were bulk storage sites for Agent Orange during the Vietnam War and are currently the two most severe dioxin hot spots. Objectives This study assesses the health risk of exposure to dioxin through foods for local residents living in seven wards surrounding these airbases. Methods This study follows the Australian Environmental Health Risk Assessment Framework to assess the health risk of exposure to dioxin in foods. Forty-six pooled samples of commonly consumed local foods were collected and analyzed for dioxins/furans. A food frequency and Knowledge–Attitude–Practice survey was also undertaken at 1000 local households, various stakeholders were involved, and related publications were reviewed. Results Total dioxin/furan concentrations in samples of local “high-risk” foods (e.g. free-range chicken meat and eggs, ducks, freshwater fish, snails and beef) ranged from 3.8 pg TEQ/g to 95 pg TEQ/g, while in “low-risk” foods (e.g. caged chicken meat and eggs, seafood, pork, leafy vegetables, fruits, and rice) concentrations ranged from 0.03 pg TEQ/g to 6.1 pg TEQ/g. Estimated daily intake of dioxin for people who did not consume local high-risk foods ranged from 3.2 pg TEQ/kg bw/day to 6.2 pg TEQ/kg bw/day (Bien Hoa) and from 1.2 pg TEQ/kg bw/day to 4.3 pg TEQ/kg bw/day (Da Nang). Consumption of local high-risk foods resulted in extremely high daily dioxin intakes (60.4–102.8 pg TEQ/kg bw/day in Bien Hoa; 27.0–148.0 pg TEQ/kg bw/day in Da Nang). Conclusions Consumption of local “high-risk” foods increases daily dioxin intakes to far above the WHO-recommended TDI (1–4 pg TEQ/kg bw/day). Practicing appropriate preventive measures is necessary to significantly reduce exposure and health risk.
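The daily-intake arithmetic behind estimates like these can be shown with a minimal sketch: intake is the sum over foods of concentration times consumption, divided by body weight. The concentration and consumption figures below are hypothetical, chosen only to fall within the ranges reported in the abstract, and the body weight is an assumed value.

```python
def daily_intake(foods, body_weight_kg):
    """Estimated dioxin intake in pg TEQ per kg body weight per day.

    foods: list of (concentration in pg TEQ/g, consumption in g/day) pairs
    """
    total_pg_teq = sum(conc * grams for conc, grams in foods)
    return total_pg_teq / body_weight_kg

# Hypothetical diet: free-range chicken (high-risk) plus rice (low-risk)
diet = [(20.0, 50.0),   # 20 pg TEQ/g chicken, 50 g/day
        (0.1, 400.0)]   # 0.1 pg TEQ/g rice, 400 g/day
edi = daily_intake(diet, body_weight_kg=55.0)  # ~18.9 pg TEQ/kg bw/day
```

Even this modest hypothetical portion of a high-risk food pushes the estimate far above the WHO TDI of 1–4 pg TEQ/kg bw/day, which is the pattern the study reports.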
Abstract:
The literature supporting the notion that active, student-centered learning is superior to passive, teacher-centered instruction is encyclopedic (Bonwell & Eison, 1991; Bruning, Schraw, & Ronning, 1999; Haile, 1997a, 1997b, 1998; Johnson, Johnson, & Smith, 1999). Previous action research demonstrated that introducing a learning activity in class improved students' learning outcomes (Mejias, 2010). People acquire knowledge and skills through practice and reflection, not by watching and listening to others telling them how to do something. In this context, this project aims to gain further insight into how much interactivity a class curriculum should include, and how that interactivity should be aligned with assessment so that the intended learning outcomes (ILOs) are achieved. In this project, interactivity is implemented in the form of problem-based learning (PBL). I argue that more continuous formative feedback, when implemented with the right amount of PBL, stimulates student engagement, bringing substantial benefits to student learning. Different levels of practical work (PBL) were implemented together with two different assessment approaches in two subjects. The outcomes were measured using qualitative and quantitative data to evaluate levels of student engagement and satisfaction in terms of the ILOs.
Abstract:
This paper describes in detail our Security-Critical Program Analyser (SCPA). SCPA is used to assess the security of a given program based on its design or source code with regard to data flow-based metrics. Furthermore, it allows software developers to generate a UML-like class diagram of their program and annotate its confidential classes, methods and attributes. SCPA is also capable of producing Java source code for the generated design of a given program. This source code can then be compiled and the resulting Java bytecode program can be used by the tool to assess the program's overall security based on our security metrics.
Abstract:
Purpose This paper examines the relationship between flood exposure and levels of social trust among a cohort of adult men from refugee backgrounds who were affected by the 2011 Queensland floods in Australia. Design/methodology/approach A quantitative questionnaire was administered to 141 men from refugee backgrounds almost two years after the 2011 Queensland floods. The survey was administered in person by trained peer interviewers, and included a number of standardised instruments assessing respondents’ socio-demographic characteristics, levels of social trust towards and from neighbours, the police, the wider Australian community, and the media, and exposure to and impact of the floods. Multiple logistic regression analyses were used to assess the relationship between flood exposure and social trust adjusting for pre-disaster levels of trust and other potentially confounding variables. Findings Participants with higher levels of flood exposure were significantly more likely to report greater levels of trust both towards and from their neighbours, the wider Australian community, and the media, and they were also more likely to believe that most people can be trusted. Research limitations/implications Although the study reports on data collected two years after the floods, the analysis has adjusted for pre-disaster measures of social trust and other socio-demographic variables. Originality/value Our paper has highlighted the important place of social trust and social capital for refugee communities in a post-disaster setting. Disaster responses that support social capital among marginalised populations are critical to increasing community resilience and supporting recovery.
Abstract:
As the global intellectual property (IP) system grows and now impacts virtually all citizens, it is crucial that the means to understand these rights and their teachings, as well as their implications and scope, become global public goods. To do so requires not only that the primary data be available freely and openly in a standardized and re-usable form, but also that tools to visualize, analyse and model those data be similarly open and free public goods, adaptable to diverse needs and uses; this we call ‘transparency’.