999 results for Oriented Cue Exposure
Abstract:
The aim of this work was to quantify exposure to particles emitted by wood-fired ovens in pizzerias. Overall, 15 microenvironments were chosen and analyzed in a 14-month experimental campaign. Particle number concentration and distribution were measured simultaneously using a Condensation Particle Counter (CPC), a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). Surface area and mass distributions and concentrations, as well as estimates of lung-deposited surface area and PM1, were evaluated using the SMPS-APS system with dosimetric models, taking into account the presence of aggregates on the basis of the Idealized Aggregate (IA) theory. The fraction of inhaled particles deposited in the respiratory system and different fractions of particulate matter were also measured by means of a Nanoparticle Surface Area Monitor (NSAM) and a photometer (DustTrak DRX), respectively. In this way, supplementary data were obtained while monitoring trends inside the pizzerias. We found that surface area and PM1 particle concentrations in pizzerias can be very high, especially when compared to other critical microenvironments such as transport hubs. During pizza cooking under normal ventilation conditions, concentrations up to 74, 70 and 23 times higher than background levels were found for number, surface area and PM1, respectively. A key parameter is the oven shape factor, defined as the ratio between the size of the face opening with respect to
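The reported enhancement factors are simple ratios of in-pizzeria concentrations to background levels. A minimal Python sketch of that arithmetic, using made-up placeholder values rather than the study's measurements:

```python
# Minimal sketch: peak-to-background exposure ratios for the three metrics
# named in the abstract. All values below are illustrative placeholders,
# not the study's data.
background = {"number": 1.0e4, "surface_area": 30.0, "pm1": 8.0}    # cm^-3, um^2 cm^-3, ug m^-3
cooking_peak = {"number": 74e4, "surface_area": 2100.0, "pm1": 184.0}

for metric, bg in background.items():
    ratio = cooking_peak[metric] / bg
    print(f"{metric}: cooking peak is {ratio:.0f}x background")
```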
Abstract:
Concentrations of ultrafine (<0.1 µm) particles (UFPs) and PM2.5 (<2.5 µm) were measured whilst commuting along a similar route by train, bus, ferry and automobile in Sydney, Australia. One trip on each transport mode was undertaken during both morning and evening peak hours throughout a working week, for a total of 40 trips. Analyses comprised one-way ANOVA to compare overall (i.e. all trips combined) geometric mean concentrations of both particle fractions measured across transport modes, and assessment of both the correlation between wind speed and individual trip means of UFPs and PM2.5, and the correlation between the two particle fractions. Overall geometric mean concentrations of UFPs and PM2.5 ranged from 2.8 (train) to 8.4 (bus) × 10⁴ particles cm⁻³ and 22.6 (automobile) to 29.6 (bus) µg m⁻³, respectively, and a statistically significant difference (p < 0.001) between modes was found for both particle fractions. Individual trip geometric mean concentrations were between 9.7 × 10³ (train) and 2.2 × 10⁵ (bus) particles cm⁻³ and 9.5 (train) to 78.7 (train) µg m⁻³. Estimated commuter exposures were variable, and the highest return trip mean PM2.5 exposure occurred in the ferry mode, whilst the highest UFP exposure occurred during bus trips. The correlation between fractions was generally poor, and in keeping with the duality of particle mass and number emissions in vehicle-dominated urban areas. Wind speed was negatively correlated with, and a generally poor determinant of, UFP and PM2.5 concentrations, suggesting a more significant role for other factors in determining commuter exposure.
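The analysis pipeline described here (per-mode geometric means followed by a one-way ANOVA across modes) can be sketched in a few lines of Python; the data below are synthetic and only the procedure mirrors the study:

```python
# Sketch of the abstract's analysis steps on synthetic data: per-mode
# geometric means of trip concentrations and a one-way ANOVA across modes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trips = {  # UFP trip means, particles cm^-3 (synthetic)
    "train": rng.lognormal(np.log(2.8e4), 0.5, 10),
    "bus":   rng.lognormal(np.log(8.4e4), 0.5, 10),
    "ferry": rng.lognormal(np.log(5.0e4), 0.5, 10),
    "car":   rng.lognormal(np.log(4.0e4), 0.5, 10),
}

for mode, x in trips.items():
    print(f"{mode}: geometric mean = {stats.gmean(x):.3g} particles cm^-3")

# One-way ANOVA on log-transformed concentrations (log-normality is the
# usual assumption behind comparing geometric means).
f, p = stats.f_oneway(*(np.log(x) for x in trips.values()))
print(f"ANOVA: F = {f:.2f}, p = {p:.4g}")
```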
Abstract:
Several studies have developed metrics for software quality attributes of object-oriented designs, such as reusability and functionality. However, metrics which measure the quality attribute of information security have received little attention. Moreover, existing security metrics measure the system either at a high level (i.e. the whole system) or at a low level (i.e. the program code). These approaches make it hard and expensive to discover and fix vulnerabilities caused by software design errors. In this work, we focus on the design of an object-oriented application and define a number of information security metrics derivable from a program's design artifacts. These metrics allow software designers to discover and fix security vulnerabilities at an early stage, and help compare the potential security of various alternative designs. In particular, we present security metrics based on composition, coupling, extensibility, inheritance, and the design size of a given object-oriented, multi-class program from the point of view of potential information flow.
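As an illustration of what a design-level security metric can look like (a hypothetical metric in the spirit of the paper, not one of its actual definitions), the sketch below scores a design by the proportion of 'classified' attributes it exposes publicly:

```python
# Illustrative sketch, not the paper's exact definitions: one design-level
# security metric in the same spirit -- the proportion of 'classified'
# attributes that a design exposes through public accessibility.
from dataclasses import dataclass, field

@dataclass
class Attribute:
    name: str
    classified: bool
    public: bool

@dataclass
class ClassDesign:
    name: str
    attributes: list = field(default_factory=list)

def classified_exposure(classes):
    """Fraction of classified attributes that are publicly accessible."""
    classified = [a for c in classes for a in c.attributes if a.classified]
    if not classified:
        return 0.0
    return sum(a.public for a in classified) / len(classified)

design = [ClassDesign("Account", [Attribute("balance", True, False),
                                  Attribute("pin", True, True)])]
print(f"classified attribute exposure: {classified_exposure(design):.2f}")
```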
Abstract:
Refactoring focuses on improving the reusability, maintainability and performance of programs. However, the impact of refactoring on the security of a given program has received little attention. In this work, we focus on the design of object-oriented applications and use metrics to assess the impact of a number of standard refactoring rules on their security by evaluating the metrics before and after refactoring. This assessment tells us which refactoring steps can increase the security level of a given program from the point of view of potential information flow, allowing application designers to improve their system’s security at an early stage.
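The before-and-after evaluation described here reduces to comparing metric vectors for two versions of a design. A trivial hedged sketch with fabricated metric names and values, assuming lower scores are more secure:

```python
# Hedged sketch: comparing hypothetical security-metric values before and
# after a refactoring step, flagging which metrics moved in the insecure
# direction (lower is assumed better for every metric here).
before = {"classified_exposure": 0.50, "classified_coupling": 0.30}
after  = {"classified_exposure": 0.25, "classified_coupling": 0.35}

for metric, old in before.items():
    new = after[metric]
    verdict = "improved" if new < old else "worsened" if new > old else "unchanged"
    print(f"{metric}: {old:.2f} -> {new:.2f} ({verdict})")
```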
Abstract:
Transit oriented developments are high-density, mixed-use developments located within a short and easily walkable distance of a major transit centre. These developments are often hypothesised as a means of enticing a mode shift from the private car to sustainable transport modes such as walking, cycling and public transport. However, it is important to gather evidence to test this hypothesis by determining the travel characteristics of transit oriented development users. For this purpose, travel surveys were conducted for an urban transit oriented development that is currently being developed. This chapter presents the findings from the preliminary analysis of the travel survey data. Kelvin Grove Urban Village, a mixed-use development located in Brisbane, Australia, was selected as the case for the transit oriented development study. Travel data for all groups of transit oriented development users, ranging from students to shoppers and residents to employees, were collected. Different survey instruments were used for different user groups to optimise response rates, and the performance of these instruments is reported herein. The travel characteristics of transit oriented development users are described in this chapter in terms of mode share, trip length distribution, and time of day of trips. The results of the travel survey reveal that Kelvin Grove Urban Village users travel by more sustainable modes of transport than other Brisbane residents.
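Mode share, the first of the reported travel characteristics, is a simple tabulation over surveyed trips. A minimal sketch on fabricated records:

```python
# Minimal sketch: computing mode share from travel-survey records.
# The trips below are fabricated; only the tabulation step is illustrated.
from collections import Counter

trips = ["walk", "bus", "car", "walk", "cycle", "bus", "walk"]  # one mode per trip
share = Counter(trips)
total = sum(share.values())
for mode, n in share.most_common():
    print(f"{mode}: {100 * n / total:.1f}%")
```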
Abstract:
Proactive communication management, instead of mortification in the glare of hostile media attention, became the theme of a four-day training program for multicultural community leaders, the object of this research. The program, held in Brisbane from December 2009 through to February 2010, was conducted under the auspices of a Community Media Link grant program shared by Griffith University and the Queensland Ethnic Communities Council, together with journalism academics from the Queensland University of Technology. Twenty-eight participants from 23 organisations took part, with a team of nine facilitators from the host organisations and guest presenters from the news media. This paper reviews the process, taking into account: its objectives, to empower participants by showing how Australian media operate and introducing participants to journalists; its pedagogical thrust, in which overview talks and role-play seminars with guest presenters from the media were combined with practice in interviews and writing for media; and its outcomes, assessed on the basis of participants' responses. The research methodology is qualitative, in that the study is based on discussions reviewing the planning and experience of sessions, and on anonymous, informal feedback questionnaires distributed to participants. Background literature on multiculturalism and community media was referred to in the study. The findings indicate positive outcomes for participants from this approach to protecting persons unversed in living in the Australian "mediatised" environment. Most affirmed that the "production side" perspective of the exercise had informed and motivated them effectively, such that henceforth they would venture far more into media management in their community leadership roles.
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices store data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify them slightly for higher density storage. Alternatively, three-dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high-speed memory. There are two ways in which data may be recorded in a three-dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three-dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but few commercial products are presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one whose refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in such materials. In phase mask storage, two-dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two-dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low-intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three-dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any smaller size results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two-dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
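The oscillation-counting approach to temperature measurement described above can be sketched as follows; the calibration constant and the probe-beam signal are placeholders, not values from the thesis:

```python
# Sketch of the oscillation-counting idea: each full intensity oscillation
# of a probe beam corresponds to a fixed temperature increment
# dT_per_cycle (a calibration constant of the crystal; the value below is
# a hypothetical placeholder).
import numpy as np
from scipy.signal import find_peaks

dT_per_cycle = 1.8  # degrees C per oscillation -- assumed calibration

t = np.linspace(0, 60, 3000)                    # seconds
intensity = np.cos(2 * np.pi * 0.2 * t) ** 2    # synthetic probe-beam signal

peaks, _ = find_peaks(intensity, height=0.9)
n_cycles = len(peaks) - 1                       # full oscillations between peaks
print(f"estimated temperature change: {n_cycles * dT_per_cycle:.1f} C")
```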
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three-dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
Abstract:
Purpose: The purpose of this empirical paper is to investigate internal marketing from a behavioural perspective. The impact of internal marketing behaviours, operationalised as an internal market orientation (IMO), on employees' marketing and other in-role behaviours (IRB) was examined. ---------- Design/methodology/approach: Survey data measuring IMO, market orientation and a range of constructs relevant to the nomological network in which they are embedded were collected from UK retail managers. These were tested to establish their psychometric properties, and the conceptual model was analysed using structural equation modelling, employing a partial least squares methodology. ---------- Findings: IMO has positive consequences for employees' market-oriented and other IRB. These, in turn, influence marketing success. ---------- Research limitations/implications: The paper provides empirical support for the long-held assumption that internal and external marketing are related and that organisations should balance their external focus with some attention to employees. Future research could measure the attitudes and behaviours of managers, employees and customers directly and explore the relationships between them. ---------- Practical implications: Firms must ensure that they do not put the needs of their employees second to those of managers and shareholders; managers must develop their listening skills and organisations must become more responsive to the needs of their employees. ---------- Originality/value: The paper contributes to the scarce body of empirical support for the role of internal marketing in services organisations. For researchers, this paper legitimises the study of internal marketing as a route to external market success; for managers, the study provides quantifiable evidence that focusing on employees' wants and needs impacts their behaviours towards the market.
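As a loose illustration of the partial least squares step, the sketch below uses scikit-learn's PLSRegression as a simplified stand-in for full PLS path modelling, with entirely fabricated indicator data:

```python
# Simplified stand-in for the paper's PLS analysis: PLS regression
# relating synthetic IMO survey items to an in-role behaviour (IRB)
# score. This is PLS regression, not full PLS-SEM, and all data and
# variable names are fabricated for illustration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
imo_items = rng.normal(size=(200, 5))                    # 5 IMO survey items
irb = imo_items @ np.array([0.4, 0.3, 0.2, 0.1, 0.0]) \
      + rng.normal(scale=0.5, size=200)                  # synthetic outcome

pls = PLSRegression(n_components=2)
pls.fit(imo_items, irb)
print("R^2 of IRB on IMO items:", round(pls.score(imo_items, irb), 3))
```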
Abstract:
This study used the Australian Environmental Health Risk Assessment Framework to assess the human health risk of dioxin exposure through foods for local residents in two wards of Bien Hoa City, Vietnam. These wards are known hot-spots for dioxin and a range of stakeholders from central government to local levels were involved in this process. Publications on dioxin characteristics and toxicity were reviewed and dioxin concentrations in local soil, mud, foods, milk and blood samples were used as data for this risk assessment. A food frequency survey of 400 randomly selected households in these wards was conducted to provide data for exposure assessment. Results showed that local residents who had consumed locally cultivated foods, especially freshwater fish and bottom-feeding fish, free-ranging chicken, duck, and beef were at a very high risk, with their daily dioxin intake far exceeding the tolerable daily intake recommended by the WHO. Based on the results of this assessment, a multifaceted risk management program was developed and has been recognized as the first public health program ever to have been implemented in Vietnam to reduce the risks of dioxin exposure at dioxin hot-spots.
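The exposure assessment implied here boils down to summing consumption-weighted dioxin concentrations and dividing by body weight, then comparing against the WHO tolerable daily intake. A hedged sketch with placeholder consumption and concentration figures:

```python
# Hedged sketch of the exposure calculation implied by the abstract:
# daily dioxin intake (pg TEQ per kg body weight per day) summed over
# food items, compared against the WHO tolerable daily intake range of
# 1-4 pg TEQ/kg bw/day. All food figures are placeholders, not the
# study's measurements.
foods = {
    # food: (consumption g/day, dioxin concentration pg TEQ/g)
    "bottom-feeding fish": (50.0, 2.0),
    "free-range chicken":  (40.0, 0.8),
    "beef":                (30.0, 0.3),
}
body_weight_kg = 60.0  # assumed adult body weight

daily_intake = sum(g * c for g, c in foods.values()) / body_weight_kg
print(f"daily intake: {daily_intake:.2f} pg TEQ/kg bw/day "
      f"(WHO TDI: 1-4 pg TEQ/kg bw/day)")
```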
Abstract:
In cloud computing, resource allocation and scheduling of multiple composite web services is an important challenge. This is especially so in a hybrid cloud where there may be some free resources available from private clouds but some fee-paying resources from public clouds. Meeting this challenge involves two classical computational problems. One is assigning resources to each of the tasks in the composite web service. The other is scheduling the allocated resources when each resource may be used by more than one task and may be needed at different points of time. In addition, we must consider Quality-of-Service issues, such as execution time and running costs. Existing approaches to resource allocation and scheduling in public clouds and grid computing are not applicable to this new problem. This paper presents a random-key genetic algorithm that solves this new resource allocation and scheduling problem. Experimental results demonstrate the effectiveness and scalability of the algorithm.
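A random-key genetic algorithm encodes a solution as a vector of floats that is decoded into an assignment and an ordering. The sketch below is a generic, much-simplified version of this idea for task-to-resource assignment (no precedence constraints or QoS terms, which the paper's problem includes); all names and cost figures are illustrative:

```python
# Minimal random-key GA sketch for task-to-resource assignment.
# Each gene is a float in [0, n_resources): its integer part selects a
# resource and its fractional part gives the task's scheduling priority
# (the classic random-key decoding). Costs are random placeholders.
import random

N_TASKS, N_RES = 8, 3
COST = [[random.uniform(1, 5) for _ in range(N_RES)] for _ in range(N_TASKS)]

def decode(keys):
    """Map a chromosome to (resource, priority) per task."""
    return [(int(k), k - int(k)) for k in keys]

def makespan(keys):
    decoded = decode(keys)
    load = [0.0] * N_RES
    # process tasks in priority order on their chosen resource; ordering
    # matters once precedence constraints are added (omitted here)
    for i in sorted(range(N_TASKS), key=lambda i: decoded[i][1]):
        r = decoded[i][0]
        load[r] += COST[i][r]
    return max(load)

def evolve(pop_size=40, gens=100, elite=0.2, mutant=0.2):
    pop = [[random.uniform(0, N_RES - 1e-9) for _ in range(N_TASKS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=makespan)
        n_el, n_mu = int(elite * pop_size), int(mutant * pop_size)
        nxt = pop[:n_el]                                  # elitism
        nxt += [[random.uniform(0, N_RES - 1e-9) for _ in range(N_TASKS)]
                for _ in range(n_mu)]                     # random immigrants
        while len(nxt) < pop_size:                        # biased crossover
            a, b = random.choice(pop[:n_el]), random.choice(pop)
            nxt.append([x if random.random() < 0.7 else y
                        for x, y in zip(a, b)])
        pop = nxt
    return min(pop, key=makespan)

best = evolve()
print("best makespan:", round(makespan(best), 2))
```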
Abstract:
With the emergence of multi-core processors into the mainstream, parallel programming is no longer the specialized domain it once was. There is a growing need for systems to allow programmers to more easily reason about data dependencies and inherent parallelism in general purpose programs. Many of these programs are written in popular imperative programming languages like Java and C#. In this thesis I present a system for reasoning about side-effects of evaluation in an abstract and composable manner that is suitable for use by both programmers and automated tools such as compilers. The goal of developing such a system is to both facilitate the automatic exploitation of the inherent parallelism present in imperative programs and to allow programmers to reason about dependencies which may be limiting the parallelism available for exploitation in their applications. Previous work on languages and type systems for parallel computing has tended to focus on providing the programmer with tools to facilitate the manual parallelization of programs; programmers must decide when and where it is safe to employ parallelism without the assistance of the compiler or other automated tools. None of the existing systems combine abstraction and composition with parallelization and correctness checking to produce a framework which helps both programmers and automated tools to reason about inherent parallelism. In this work I present a system for abstractly reasoning about side-effects and data dependencies in modern, imperative, object-oriented languages using a type and effect system based on ideas from Ownership Types. I have developed sufficient conditions for the safe, automated detection and exploitation of a number of task, data and loop parallelism patterns in terms of ownership relationships. To validate my work, I have applied my ideas to the C# version 3.0 language to produce a language extension called Zal. I have implemented a compiler for the Zal language as an extension of the GPC# research compiler as a proof of concept of my system. I have used it to parallelize a number of real-world applications to demonstrate the feasibility of my proposed approach. In addition to this empirical validation, I present an argument for the correctness of the type system and language semantics I have proposed, as well as sketches of proofs for the correctness of the sufficient conditions for parallelization proposed.
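The core safety condition, that two computations may run in parallel when their effects on ownership-delimited regions do not conflict, can be illustrated with a toy disjointness check (a Python stand-in; the thesis works with a static type and effect system, not runtime sets):

```python
# Toy sketch of the effect-disjointness reasoning described above: two
# tasks may run in parallel if neither writes a region that the other
# reads or writes. Region labels here stand in for the ownership
# contexts of the thesis.
def effects_disjoint(a, b):
    """a, b: dicts with 'reads' and 'writes' sets of abstract regions."""
    return (not a["writes"] & (b["reads"] | b["writes"]) and
            not b["writes"] & (a["reads"] | a["writes"]))

t1 = {"reads": {"list.head"}, "writes": {"acc1"}}
t2 = {"reads": {"list.head"}, "writes": {"acc2"}}
t3 = {"reads": {"acc1"},      "writes": {"log"}}

print(effects_disjoint(t1, t2))  # True: read-read sharing is safe
print(effects_disjoint(t1, t3))  # False: t3 reads what t1 writes
```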
Abstract:
Pipe insulation between the collector and storage tank on pumped storage (commonly called split) solar water heaters can be subject to high temperatures, with a maximum equal to the collector stagnation temperature. The frequency of occurrence of these temperatures depends on many factors, including climate, hot water demand, system size and efficiency. This paper outlines the findings of a computer modelling study to quantify the frequency of occurrence of pipe temperatures of 80 degrees Celsius or greater at the outlet of the collectors for these systems. This study will help insulation suppliers determine the suitability of their materials for this application. The TRNSYS program was used to model the performance of a common size of domestic split solar system, using both flat plate and evacuated tube selective-surface collectors. Each system was modelled at a representative city in each of the 6 climate zones for Australia and New Zealand, according to AS/NZS4234 - Heated Water Systems - Calculation of Energy Consumption, and the ORER RECs calculation method. TRNSYS was used to predict the frequency of occurrence of the temperatures that the pipe insulation would be exposed to over an average year, for hot water consumption patterns specified in AS/NZS4234, and for worst case conditions in each of the climate zones. The results show:
* For selectively surfaced, flat plate collectors in the hottest location (Alice Springs) with a medium size hot water demand according to AS/NZS4234, the annual frequency of occurrence of temperatures at and above 80 degrees Celsius was 33 hours. The frequency of temperatures at and above 140 degrees Celsius was insignificant.
* For evacuated tube collectors in the hottest location (Alice Springs), the annual frequency of temperatures at and above 80 degrees Celsius was 50 hours. Temperatures at and above 140 degrees Celsius were significant and were estimated to occur for more than 21 hours per year in this climate zone. Even in Melbourne, temperatures at and above 80 degrees Celsius can occur for 12 hours per year, and at and above 140 degrees Celsius for 5 hours per year.
* The worst case identified was for evacuated tube collectors in Alice Springs, with mostly afternoon loads in January. Under these conditions, the frequency of temperatures at and above 80 degrees Celsius was 10 hours for this month alone. Temperatures at and above 140 degrees Celsius were predicted to occur for 5 hours in January.
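The reported figures are frequencies of exceedance extracted from simulated annual temperature series. A hedged sketch of that post-processing step on a synthetic series (not TRNSYS output):

```python
# Sketch of the post-processing behind the reported figures: counting,
# from an hourly collector-outlet temperature series, how many hours per
# year fall at or above each threshold. The series below is synthetic.
import numpy as np

rng = np.random.default_rng(2)
hourly_temp = rng.normal(45, 20, 8760).clip(min=10)   # fake annual series, deg C

for threshold in (80, 140):
    hours = int((hourly_temp >= threshold).sum())
    print(f">= {threshold} C: {hours} h/year")
```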
Abstract:
Background: Pregnant women exposed to traffic pollution have an increased risk of negative birth outcomes. We aimed to investigate the size of this risk using a prospective cohort of 970 mothers and newborns in Logan, Queensland. ----- ----- Methods: We examined two measures of traffic: distance to nearest road and number of roads around the home. To examine the effect of distance we used the number of roads around the home in radii from 50 to 500 metres. We examined three road types: freeways, highways and main roads.----- ----- Results: There were no associations with distance to road. A greater number of freeways and main roads around the home were associated with a shorter gestation time. There were no negative impacts on birth weight, birth length or head circumference after adjusting for gestation. The negative effects on gestation were largely due to main roads within 400 metres of the home. For every 10 extra main roads within 400 metres of the home, gestation time was reduced by 1.1% (95% CI: -1.7, -0.5; p-value = 0.001).----- ----- Conclusions: Our results add weight to the association between exposure to traffic and reduced gestation time. This effect may be due to the chemical toxins in traffic pollutants, or because of disturbed sleep due to traffic noise.
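Taking the reported coefficient at face value, and assuming the log-linear model form such percentage effects are usually estimated from, the expected gestation for a given road count can be back-calculated (280 days is an assumed baseline, not a figure from the study):

```python
# Arithmetic sketch of the reported effect: a 1.1% reduction in gestation
# per 10 extra main roads within 400 m, compounded multiplicatively and
# applied to an assumed 280-day baseline. This restates the abstract's
# coefficient; it is not a model fit.
baseline_days = 280.0
reduction_per_10_roads = 0.011

for n_roads in (10, 20, 30):
    factor = (1 - reduction_per_10_roads) ** (n_roads / 10)
    print(f"{n_roads} main roads within 400 m: ~{baseline_days * factor:.1f} days")
```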
Abstract:
We present a hierarchical model for assessing an object-oriented program's security. Security is quantified using structural properties of the program code to identify the ways in which 'classified' data values may be transferred between objects. The model begins with a set of low-level security metrics based on traditional design characteristics of object-oriented classes, such as data encapsulation, cohesion and coupling. These metrics are then used to characterise higher-level properties concerning the overall readability and writability of classified data throughout the program. In turn, these metrics are then mapped to well-known security design principles such as 'assigning the least privilege' and 'reducing the size of the attack surface'. Finally, the entire program's security is summarised as a single security index value. These metrics allow different versions of the same program, or different programs intended to perform the same task, to be compared for their relative security at a number of different abstraction levels. The model is validated via an experiment involving five open source Java programs, using a static analysis tool we have developed to automatically extract the security metrics from compiled Java bytecode.
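The hierarchical roll-up can be pictured as a weighted aggregation from class-level metrics to intermediate properties to a single index. The sketch below is purely illustrative; the metric names, weights and averaging scheme are not the paper's definitions:

```python
# Hedged sketch of a hierarchical metric roll-up: low-level class metrics
# are combined into two intermediate properties and then into one index.
# All names, weights and the averaging scheme are hypothetical.
low_level = {                      # per-program metric values, scaled to [0, 1]
    "data_encapsulation": 0.8,     # higher assumed better here
    "cohesion": 0.6,
    "coupling": 0.3,               # inverted below: high coupling assumed bad
}

readability = (low_level["data_encapsulation"] + (1 - low_level["coupling"])) / 2
writability = (low_level["cohesion"] + (1 - low_level["coupling"])) / 2

security_index = 0.5 * readability + 0.5 * writability
print(f"security index: {security_index:.2f}")
```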