277 results for Running Lamps.


Relevance:

10.00%

Publisher:

Abstract:

Generally, the magnitude of pollutant emissions from diesel engines running on biodiesel fuel is ultimately coupled to the structures of the molecules that constitute the fuel. Previous studies demonstrated a relationship between the organic fraction of particulate matter (PM) and its oxidative potential. Herein, emissions from a diesel engine running on different biofuels were analysed in more detail to explore the role different organic fractions play in the measured oxidative potential. In this work, a more detailed chemical analysis of biofuel PM was undertaken using a compact Time of Flight Aerosol Mass Spectrometer (c-ToF AMS). This enabled better identification of the different organic fractions that contribute to the overall measured oxidative potentials. The concentration of reactive oxygen species (ROS) was measured using a profluorescent nitroxide molecular probe, 9-(1,1,3,3-tetramethylisoindolin-2-yloxyl-5-ethynyl)-10-(phenylethynyl)anthracene (BPEAnit). The oxidative potential of the PM, measured through the ROS content, although proportional to the total organic content in certain cases, shows a much stronger correlation with the oxygenated organic fraction as measured by the c-ToF AMS. This highlights the importance of knowing the surface chemistry of particles when assessing their health impacts. It also sheds light on new aspects of particulate emissions that should be taken into account when establishing relevant metrics for assessing the health implications of replacing diesel with alternative fuels.
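As a hedged illustration of the comparison this abstract describes, the sketch below correlates ROS-derived oxidative potential against total organic mass and against the oxygenated organic fraction. All values and variable names are hypothetical placeholders, not the study's data.

```python
# A hedged sketch of the correlation comparison: ROS-based oxidative
# potential vs. total organic mass and vs. the oxygenated organic fraction.
# All values are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

ros           = np.array([0.8, 1.1, 1.9, 2.4, 3.0])       # ROS (nmol per mg PM)
total_organic = np.array([12.0, 15.0, 22.0, 30.0, 31.0])  # total OA (ug/m^3)
oxygenated_oa = np.array([2.1, 3.0, 5.2, 6.8, 8.4])       # oxygenated OA (ug/m^3)

r_total, _ = pearsonr(ros, total_organic)
r_oxy, _   = pearsonr(ros, oxygenated_oa)
print(f"ROS vs total organics: r = {r_total:.2f}")
print(f"ROS vs oxygenated OA:  r = {r_oxy:.2f}")
```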

Relevance:

10.00%

Publisher:

Abstract:

Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres such as Clouds. One of these challenges is managing Cloud resources for SaaS in a way that guarantees SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems for composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems, so evolutionary algorithms are adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of only a homogeneous type of component. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and to maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method, while the second uses a penalty-based method, to handle the problem constraints. The experimental results confirm that the GGAs always produce a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task; the problem also involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem, exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the constraints and achieve the objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a lower-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud, and has developed several types of evolutionary algorithms to address them, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
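A minimal sketch may make the penalty-based evolutionary approach concrete. This is not the thesis's algorithm: the component demands, server capacities, dependency list and cost weights below are invented for illustration, and only a plain genetic algorithm with a penalty term is shown.

```python
# Hypothetical sketch: penalty-based GA for placing SaaS components on servers.
# A chromosome is a list: chromosome[i] = server hosting component i.
import random

N_COMPONENTS, N_SERVERS = 8, 4
demand = [2, 3, 1, 4, 2, 3, 1, 2]       # resource demand per component (hypothetical)
capacity = [6, 6, 6, 6]                 # capacity per server (hypothetical)
deps = [(0, 1), (2, 3), (4, 5)]         # dependent pairs: co-locating them is cheaper

def fitness(ch):
    # Cost: servers used + communication cost for split dependencies,
    # plus a penalty for violating server capacity constraints.
    used = len(set(ch))
    comm = sum(1 for a, b in deps if ch[a] != ch[b])
    load = [0] * N_SERVERS
    for comp, srv in enumerate(ch):
        load[srv] += demand[comp]
    penalty = sum(max(0, l - c) for l, c in zip(load, capacity))
    return used + comm + 10 * penalty   # lower is better

def evolve(pop_size=40, generations=200):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            p1, p2 = random.sample(survivors, 2)
            cut = random.randrange(1, N_COMPONENTS)     # one-point crossover
            child = p1[:cut] + p2[cut:]
            if random.random() < 0.2:                   # mutation
                child[random.randrange(N_COMPONENTS)] = random.randrange(N_SERVERS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print("placement:", best, "cost:", fitness(best))
```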

Relevance:

10.00%

Publisher:

Abstract:

Every year a number of pedestrians are struck by trains, resulting in death and serious injury. While much research has been conducted on train-vehicle collisions, very little is currently known about the aetiology of train-pedestrian collisions. To date, scant research has investigated the demographics of rule breakers, the frequency of deliberate violations versus errors, and the influence of the classic deterrence approach on subsequent behaviours. Aim: This study aimed to identify pedestrians' self-reported reasons for engaging in violations at crossings, the frequency and nature of rule breaking, and whether the threat of sanctions influences such events. Method: A questionnaire was administered to 511 participants of all ages. Results: Analysis revealed that pedestrians (particularly younger groups) were more likely to commit deliberate violations than to make crossing errors (i.e., mistakes). The most frequent reasons given for deliberate violations were that participants were running late and did not want to miss their train, or believed that the gate was taking too long to open and so might be malfunctioning. In regard to classical deterrence, an examination of the perceived threat of being apprehended and fined for a crossing violation revealed that participants reported the highest mean scores for swiftness of punishment, suggesting they were generally aware that they would receive an "on the spot" fine. However, the overall mean scores for certainty and severity of sanctions (for violating the rules) indicate that participants did not perceive either as very high. This paper further discusses the research findings in relation to the development of interventions designed to improve pedestrian crossing safety.

Relevance:

10.00%

Publisher:

Abstract:

"The financial system is a key influencer of the health and efficiency of an economy. The role of the financial system is to gather money from people and businesses that currently have more money than they need and transfer it to those that can use it for either business or consumer expenditures. This flow of funds through financial markets and institutions in the Australian economy is huge (in the billions of dollars), affecting business profits, the rate of inflation, interest rates and the production of goods and services. In general, the larger the flow of funds and the more efficient the financial system, the greater the economic output and welfare in the economy. It is not possible to have a modern, complex economy such as that in Australia, without an efficient and sound financial system. The global financial crisis (GFC) of late 2007–09 (and the ensuing European debt crisis), where the global financial market was on the brink of collapse with only significant government intervention stopping a catastrophic global failure of the market, illustrated the importance of the financial system. Financial Markets, Institutions and Money 3rd edition introduces students to the financial system, its operations, and participants. The text offers a fresh, succinct analysis of the financial markets and discusses how the many participants in the financial system interrelate. This includes coverage of regulators, regulations and the role of the Reserve Bank of Australia, that ensure the system’s smooth running, which is essential to a modern economy. The text has been significantly revised to take into account changes in the financial world."---publisher website Table of Contents 1. The financial system - an overview 2. The Monetary Authorities 3. The Reserve Bank of Australia and interest rates 4. The level of interest rates 5. Mathematics of finance 6. Bond Prices and interest rate risk 7. The Structure of Interest Rates 8. Money Markets 9. Bond Markets 10. Equity Markets

Relevance:

10.00%

Publisher:

Abstract:

National Australian reviews advocate exploring new models for preservice teacher education. This study investigates the outcomes of the School-Community Integrated Learning (SCIL) pathway as a model for advancing preservice teachers' understandings of teaching. Thirty-two final-year preservice teachers were surveyed, with extended written responses, on how the SCIL pathway advanced their understandings of teaching. Results indicated 100% agreement on 6 of the 27 survey items, and 78% or more of the preservice teachers agreed that they had a range of experiences across the five categories (i.e., personal-professional skill development, understandings of system requirements, teaching practices, student behaviour, and reflective practices). Extended responses suggested they had developed understandings around setting up classrooms, whole-school planning processes with professional development, the allocation of teacher responsibilities (e.g., playground duties), parent-teacher interviews, diagnostic testing for literacy and numeracy, commencing running records of students' assessment results, and the development of relationships (with students, teachers and parents). Although a longitudinal study is required to determine long-term effects, the SCIL pathway may be viewed as a positive step towards preparing final-year preservice teachers for their first year as fully fledged teachers.

Relevance:

10.00%

Publisher:

Abstract:

The 48 hour game making challenge has been running since 2007. In recent years, we have not only been running a 'game jam' for the local community but have also been exploring the way in which the event itself, and the place of the event, has the potential to create its own stories. Game jams are the creative festivals of the game development community, and a game jam is very much an event or performance; its stories are those of subjective experience. Participants return year after year and recount personal stories from previous challenges; arrival in the 48hr location typically inspires instances of individual memory and narration more in keeping with those of a music festival or an oft-frequented holiday destination. Since its inception, the 48hr has been heavily documented, from the photo-blogging of our first jam and the Twitter streams of more recent events to more formal interviews and documentaries (see Anderson, 2012). We have even had our own moments of Gonzo journalism, with an on-site press room one year and an 'embedded' journalist another year (Keogh, 2011). In the last two years of the 48hr we have started to explore ways and means to collect more abstract data during the event, that is, empirical data about movement and activity. The intent behind this form of data collection was to explore graphic and computer-generated visualisations of the event, not for the purpose of formal analysis but in the service of further storytelling. [Excerpt from truna aka j.turner, Thomas & Owen, 2013] See: truna aka j.turner, Thomas & Owen (2013) Living the indie life: mapping creative teams in a 48 hour game jam and playing with data, Proceedings of the 9th Australasian Conference on Interactive Entertainment, IE'2013, September 30 - October 01 2013, Melbourne, VIC, Australia.

Relevance:

10.00%

Publisher:

Abstract:

The 2 hour game jam was performed as part of the State Library of Queensland 'Garage Gamer' series of events in summer 2013, at the SLQ exhibition. An aspect of the exhibition was the series of 'Level Up' game nights, and we hosted the first of these under the auspices of brIGDA, Game On. It was a party, but the focal point of the event was a live-streamed 2 hour game jam.

Game jams have become popular amongst the game development and design community in recent years, particularly with the growth of the Global Game Jam, a yearly event which brings thousands of game makers together across different sites in different countries. Other established jams take place online, for example the Ludum Dare challenge, which has been running since 2002. Other challenges follow the same model in more intimate circumstances, and it is now common to find institutions and groups holding their own small local game making jams. There are variations on the format (some jams are more competitive than others, for example), but a common aspect is the creation of an intense creative crucible centred around teamwork and 'accelerated game development'. Works (games) produced during these intense events often display more experimental qualities than those undertaken as commercial projects. In part this is because the typical jam starts with a conceptual design brief, perhaps a single word or, in the case of the specific game jam described in this paper, three words. Teams have to envision the challenge keyword/s as a game design using whatever skills and technologies they can, and produce a finished working game in the time given.

Game jams thus provide design researchers with extraordinary fodder, and recent years have also seen a number of projects which seek to illuminate the design process as seen in these events. For example, Gaydos, Harris and Martinez discuss the opportunity of the jam to expose students to principles of design process and design spaces (2011). Rouse muses on the game jam 'as radical practice' and a 'corrective to game creation as it is normally practiced'. His observations about his own experience in a jam emphasise the same artistic endeavour foregrounded earlier, where the experience is about creation that is divorced from the instrumental motivations of commercial game design (Rouse, 2011) and where the focus is on process over product. Other participants remark on the social milieu of the event as a critical factor and on the collaborative opportunity as a rich site for engaging participants in design processes (Shin et al., 2012). Shin et al. are particularly interested in the notion of the site of the process and the ramifications of participants being in the same location, and they applaud the more localised event where there is an emphasis on local participation and collaboration. For other commentators, it is specifically the social experience in the place of the jam that is the most important aspect (see Keogh, 2011): not the material site but rather the physically embodied experience of 'being there' and being part of the event. Participants talk about game jams they have attended in a manner similar to the observations made by Dourish, where the experience is layered on top of the physical space of the event (Dourish, 2006). It is as if the event has taken on qualities of place, where we find echoes of Tuan's description of a particular site having an aura of history that makes it a very different place, redolent and evocative (Tuan, 1977).
The 2 hour game jam held during the SLQ Garage Gamer program was all about social experience.

Relevance:

10.00%

Publisher:

Abstract:

This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. Finally, we present an intrusion scenario in a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at the level of each host.
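The label-propagation idea can be sketched in a few lines. The classes below are hypothetical stand-ins, vastly simpler than the kernel-level LSM the paper describes, but they show the core mechanics: labels accumulate as information is read, and a per-group flow policy is checked before information leaves.

```python
# A hedged sketch of taint-label propagation, far simpler than the kernel
# LSM described above. Class names and the policy shape are hypothetical.
class TaintedObject:
    def __init__(self, name, labels=None):
        self.name = name
        self.labels = set(labels or ())      # taint labels carried by the object

class FlowPolicy:
    # Per-group policy: which labels may legally flow into each object.
    def __init__(self, allowed):
        self.allowed = allowed               # map: object name -> permitted labels

    def allows(self, labels, obj):
        return labels <= self.allowed.get(obj.name, set())

class Process(TaintedObject):
    def read(self, obj):
        self.labels |= obj.labels            # information flows in: union the labels

    def write(self, obj, policy):
        if not policy.allows(self.labels, obj):   # check before information leaves
            raise PermissionError(f"{self.name}: {self.labels} -> {obj.name} denied")
        obj.labels |= self.labels

# Usage: a process reads a private file, may write to the log but not the socket.
policy = FlowPolicy({"log": {"public", "private"}, "sock": {"public"}})
proc = Process("worker")
proc.read(TaintedObject("secret.txt", {"private"}))
proc.write(TaintedObject("log"), policy)       # allowed by the policy
try:
    proc.write(TaintedObject("sock"), policy)  # violates the flow policy
except PermissionError as err:
    print("violation reported:", err)
```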

Relevance:

10.00%

Publisher:

Abstract:

Re-programming of gene expression is fundamental for skeletal muscle adaptations in response to endurance exercise. This study investigated the time-course dependent changes in the muscular transcriptome following an endurance exercise trial consisting of 1 h of intense cycling immediately followed by 1 h of intense running. Skeletal muscle samples were taken at baseline and at 3 h, 48 h, and 96 h post-exercise from eight healthy, endurance-trained male individuals. RNA was extracted from the muscle samples. Differential gene expression was evaluated using Illumina microarrays and validated with qPCR. Gene set enrichment analysis identified enriched molecular signatures chosen from the Molecular Signatures Database. At 3 h post-exercise, 102 gene sets were up-regulated [family-wise error rate (FWER), P < 0.05], including groups of genes related to leukocyte migration, immune and chaperone activation, and cyclic AMP responsive element binding protein (CREB) 1 signaling. At 48 h post-exercise, among 19 enriched gene sets (FWER, P < 0.05), two gene sets related to actin cytoskeleton remodeling were up-regulated. At 96 h post-exercise, 83 gene sets were enriched (FWER, P < 0.05), 80 of which were up-regulated, including gene groups related to chemokine signaling, cell stress management, and extracellular matrix remodeling. These data provide comprehensive insights into the molecular pathways involved in acute stress, recovery, and adaptive muscular responses to endurance exercise. The novel 96 h post-exercise transcriptome indicates substantial transcriptional activity, potentially associated with the prolonged presence of leukocytes in the muscles. This suggests that muscular recovery, from a transcriptional perspective, is incomplete 96 h after endurance exercise involving muscle damage.
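As a rough illustration of the enrichment idea, the sketch below runs a hypergeometric over-representation test on invented counts. Note this is a simplification: the study used rank-based gene set enrichment analysis with family-wise error control, not this test.

```python
# A hedged sketch of gene-set over-representation via a hypergeometric test.
# This simplifies the study's method (rank-based GSEA with FWER control);
# all counts are invented for illustration.
from scipy.stats import hypergeom

N = 20000   # genes measured on the array (hypothetical)
K = 150     # genes in a set, e.g. "leukocyte migration" (hypothetical)
n = 800     # genes up-regulated at 3 h post-exercise (hypothetical)
k = 30      # up-regulated genes that fall in the set (hypothetical)

p_value = hypergeom.sf(k - 1, N, K, n)   # P(overlap >= k) by chance
print(f"enrichment p-value: {p_value:.2e}")
```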

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: An examination of melanoma incidence according to anatomical region may be one method of monitoring the impact of public health initiatives. OBJECTIVES: To examine melanoma incidence trends by body site, sex and age at diagnosis, or body site and morphology, in a population at high risk. MATERIALS AND METHODS: Population-based data on invasive melanoma cases (n = 51,473) diagnosed between 1982 and 2008 were extracted from the Queensland Cancer Registry. Age-standardized incidence rates were calculated using the direct method (2000 world standard population), and joinpoint regression models were used to fit trend lines. RESULTS: Significantly decreasing trends for melanomas on the trunk and upper limbs/shoulders were observed during recent years for both sexes under the age of 40 years and among males aged 40-59 years. However, in the 60 and over age group, the incidence of melanoma is continuing to increase at all sites (apart from the trunk) for males, and on the scalp/neck and upper limbs/shoulders for females. Rates of nodular melanoma are currently decreasing on the trunk and lower limbs. In contrast, superficial spreading melanoma is significantly increasing on the scalp/neck and lower limbs, along with substantial increases in lentigo maligna melanoma since the late 1990s at all sites apart from the lower limbs. CONCLUSIONS: In this large study we have observed significant decreases in rates of invasive melanoma in the younger age groups on less frequently exposed body sites. These results may provide some indirect evidence of the impact of long-running primary prevention campaigns.
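Direct age standardization, as used here, weights age-specific rates by a standard population. A minimal sketch with invented figures:

```python
# A hedged sketch of direct age standardization with invented figures
# (three broad age bands; real registry work uses 5-year bands).
cases        = [120, 450, 900]          # melanoma cases per age band (hypothetical)
person_years = [1.2e6, 0.9e6, 0.5e6]    # population at risk (hypothetical)
std_weights  = [0.55, 0.30, 0.15]       # standard-population weights (hypothetical)

age_specific = [c / py * 100_000 for c, py in zip(cases, person_years)]
asr = sum(w * r for w, r in zip(std_weights, age_specific))
print(f"age-standardized rate: {asr:.1f} per 100,000 person-years")
```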

Relevance:

10.00%

Publisher:

Abstract:

Recent modelling of socio-economic costs by the Australian railway industry in 2010 estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand the causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing, publicly available Histogram of Oriented Gradients (HOG) based object detector is used to detect various types of vehicles in each video frame. As vehicles are usually seen side-on from the cabin's perspective, the results of the vehicle detector are verified using an algorithm that can detect the wheels of each detected vehicle. Rail detection is facilitated using a projective transformation of the video, such that the forward-facing view becomes a bird's-eye view. A Line Segment Detector is employed as the feature extractor, and a sliding-window approach is developed to track a pair of rails. Localisation of the vehicles is done by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between the vehicle and the rail to be calculated. The resulting vehicle positions and distances are logged to a database for further analysis. We present preliminary results on the performance of a prototype video analytics module on a data set of videos covering more than 30 different railway level crossings, captured from a train journey passing through these crossings.
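Two of the steps described, the projective transformation to a bird's-eye view and HOG-based detection, can be sketched with OpenCV. The source points, file name, and the use of OpenCV's built-in pedestrian HOG model (standing in for the project's vehicle detector, which would be trained separately) are all assumptions for illustration.

```python
# A hedged sketch (not the project's code) of two steps described above:
# warping the forward-facing view to a bird's-eye view, and HOG detection.
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")          # hypothetical frame from the train video
if frame is None:                        # fallback so the sketch still runs
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
h, w = frame.shape[:2]

# 1. Projective transformation: four hypothetical source points marking a
#    trapezoid on the ground plane are mapped to a rectangle (bird's-eye view).
src = np.float32([[w * 0.40, h * 0.60], [w * 0.60, h * 0.60],
                  [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
M = cv2.getPerspectiveTransform(src, dst)
birds_eye = cv2.warpPerspective(frame, M, (w, h))

# 2. HOG-based sliding-window detection. OpenCV's built-in model detects
#    pedestrians; a vehicle detector would be trained the same way.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))

# 3. Project each detection's ground-contact point into the bird's-eye view,
#    where its distance to the detected rails can be measured.
for (x, y, bw, bh) in boxes:
    foot = np.float32([[[x + bw / 2.0, y + bh]]])
    print("ground-plane position:", cv2.perspectiveTransform(foot, M).ravel())
```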

Relevance:

10.00%

Publisher:

Abstract:

Using our porcine model of deep dermal partial thickness burn injury, various first aid cooling techniques (15°C running water, 2°C running water, ice) were applied for 20 minutes and compared with a control (ambient temperature). The subdermal temperatures were monitored during treatment, and wounds were observed and photographed weekly for 6 weeks, recording re-epithelialization, wound surface area and cosmetic appearance. Tissue histology and scar tensile strength were examined 6 weeks after the burn. The 2°C and ice treatments decreased the subdermal temperature fastest and to the lowest point; however, the 15°C and 2°C treated wounds generally had better outcomes in terms of re-epithelialization, scar histology and scar appearance. These findings provide evidence to support the current first aid guideline of cold tap water (approximately 15°C) for 20 minutes as beneficial in helping to heal the burn wound. Colder water at 2°C is also beneficial. Ice should not be used.

Relevance:

10.00%

Publisher:

Abstract:

Using our porcine model of deep dermal partial thickness burn injury, various durations (10 min, 20 min, 30 min or 1 h) and delays (immediate, 10 min, 1 h, 3 h) of 15°C running water first aid were applied to burns and compared to untreated controls. The subdermal temperatures were monitored during treatment, and wounds were observed weekly for 6 weeks for re-epithelialisation, wound surface area and cosmetic appearance. At 6 weeks after the burn, tissue biopsies were taken of the scar for histological analysis. Results showed that immediate application of cold running water for 20 min is associated with improved re-epithelialisation over the first 2 weeks post-burn and decreased scar tissue at 6 weeks. First aid application of cold water for as little as 10 min, or with a delay of up to 1 h, still provides benefit.

Relevance:

10.00%

Publisher:

Abstract:

Throughout history, many different and sometimes bizarre treatments have been prescribed for burns. Unfortunately, many of these treatments persist today, although they often lack sufficient evidence to support their use. This paper reviews common first aid and pre-hospital treatments for burns (water, whether cold or warm; ice; oils; powders; and natural plant therapies), possible mechanisms whereby they might work, and the literature that supports their use. From the published work to date, the current recommendation for the first aid treatment of burn injuries should be to use cold running tap water (between 2 and 15°C) on the burn, not ice or alternative plant therapies.

Relevance:

10.00%

Publisher:

Abstract:

Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to explore. Its recent gain in popularity can be attributed in some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as it uses explicit data at a fine level of detail, and computer-intensive, as it requires many interactions between agents, which can learn and have goals. With the growing availability of data and the increase in computing power, these concerns are fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems.

One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.

Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into (a) assets, which describe the entities' physical characteristics, and (b) agents, which describe their behaviour according to their goals and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of the reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage varies depending on the purpose of their installation. While any battery can be described by its physical properties (e.g., capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required, depending on the simulation to be run. For example, data can be used to describe the environment the agents respond to (e.g., weather for solar panels) or to describe the assets and their relations to one another (e.g., the network assets).

Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of assets and agents using factories, and schedules their execution, which can proceed sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g., installation of decentralised generators) or behaviours (e.g., responses to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains, such as transport, which is part of future work with the addition of electric vehicles.
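The asset/agent separation can be illustrated with a small sketch. The class names and behaviour below are hypothetical, not the MODAM API: the asset carries only physical characteristics, while the agent supplies the behaviour, so the same asset class can be reused with different agents.

```python
# A hedged sketch (hypothetical, not the MODAM API) of separating an asset's
# physical characteristics from the agent behaviour that drives it.
from dataclasses import dataclass

@dataclass
class BatteryAsset:
    # Physical characteristics only; reusable across simulations.
    capacity_kwh: float
    depth_of_discharge: float
    charge_kwh: float = 0.0

class PeakShavingAgent:
    # Behaviour only: discharge above a demand threshold, recharge below it.
    def __init__(self, asset: BatteryAsset, threshold_kw: float):
        self.asset, self.threshold = asset, threshold_kw

    def step(self, demand_kw: float) -> float:
        usable = self.asset.charge_kwh * self.asset.depth_of_discharge
        if demand_kw > self.threshold and usable > 0:
            delivered = min(demand_kw - self.threshold, usable)
            self.asset.charge_kwh -= delivered
            return -delivered                 # negative: battery supplies the load
        headroom = self.asset.capacity_kwh - self.asset.charge_kwh
        charged = min(1.0, headroom)          # charge at up to 1 kWh per step
        self.asset.charge_kwh += charged
        return charged                        # positive: battery draws from grid

# The same BatteryAsset could be driven by a different agent (e.g. one
# maximising solar self-consumption) without touching the asset class.
battery = BatteryAsset(capacity_kwh=10, depth_of_discharge=0.8, charge_kwh=8)
agent = PeakShavingAgent(battery, threshold_kw=5)
for demand in [3, 7, 9, 4]:
    print(f"demand {demand} kW -> battery flow {agent.step(demand):+.1f} kWh")
```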