306 results for Treadmill running
Abstract:
Locally available biomass solid wastes (pine seed, date seed, plum seed, nutshell, hay of catkin, rice husk, jute stick, sawdust, wheat straw and linseed residue) in particle form have been pyrolyzed in a laboratory-scale fixed-bed reactor. The products obtained are pyrolysis oil, solid char and gas. The oil and char are collected while the gas is flared into the atmosphere. The variation of oil yield for the different biomass feedstocks with reaction parameters such as reactor bed temperature, feed size and running time is presented comparatively in the paper. A maximum liquid yield of 55 wt% of dry feedstock is obtained at an optimum temperature of 500 °C for a feed size of 300-600 μm and a running time of 55 min with nutshell as the feedstock, while the minimum liquid yield of 30 wt% of feedstock is found at an optimum temperature of 400 °C for a feed size of 2.36 mm and a running time of 65 min with linseed residue. A detailed study of the variation of product yields with reaction parameters is presented for the latest investigation, with pine seed as the feedstock, where a maximum liquid yield of 40 wt% of dry feedstock is obtained at an optimum temperature of 500 °C for a feed size of 2.36-2.76 mm and a running time of 120 min. The pyrolysis oil is characterized and a comparison of selected properties of the oil is presented. The study shows that biomass solid wastes have the potential to be converted into liquid oil as a source of renewable energy, with some further upgrading of the products.
Abstract:
In this study, a tandem LC-MS (Waters Xevo TQ) MRM-based method was developed for rapid, broad profiling of hydrophilic metabolites from biological samples, in either positive or negative ion mode, without the need for an ion-pairing reagent, using a reversed-phase pentafluorophenylpropyl (PFPP) column. The developed method was successfully applied to analyze various biological samples from C57BL/6 mice, including urine, duodenum, liver, plasma, kidney, heart, and skeletal muscle. As a result, a total of 112 hydrophilic metabolites were detected within 8 min of running time to obtain a metabolite profile of the biological samples. The analysis of this number of hydrophilic metabolites is significantly faster than in previous studies. Classification separation of metabolites from different tissues was globally analyzed using the PCA, PLS-DA and HCA biostatistical methods. Overall, most of the hydrophilic metabolites were found to have a "fingerprint" characteristic of tissue dependency. In general, higher levels of most metabolites were found in urine, duodenum, and kidney. Altogether, these results suggest that this method has potential application for targeted metabolomic analyses of hydrophilic metabolites in a wide range of biological samples.
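The tissue-classification step lends itself to a short illustration. Below is a minimal sketch, assuming a hypothetical peak-area matrix and tissue labels (none of these values come from the study), of how metabolite profiles might be autoscaled and separated with PCA; the study additionally used PLS-DA and HCA.

```python
# Minimal PCA sketch; the data matrix, sample counts and tissue labels
# are hypothetical placeholders, not data from the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_metabolites = 42, 112                  # 112 metabolites, as reported
X = rng.lognormal(size=(n_samples, n_metabolites))  # peak areas (hypothetical)
tissues = np.repeat(["urine", "duodenum", "liver", "plasma",
                     "kidney", "heart", "muscle"], 6)

# Autoscale each metabolite, then project onto two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for t in np.unique(tissues):
    centroid = scores[tissues == t].mean(axis=0)
    print(f"{t:>8s}  PC1/PC2 centroid: {centroid.round(2)}")
```

Tissue-dependent clustering would appear as well-separated groups of scores; with the random data used here, the centroids simply sit near the origin.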
Abstract:
In Australia, and elsewhere, the movement of trains on long-haul rail networks is usually planned in advance. Typically, a train plan is developed to confirm that the required train movements and track maintenance activities can occur. The plan specifies when track segments will be occupied by particular trains and maintenance activities. On the day of operation, a train controller monitors and controls the movement of trains and maintenance crews, and updates the train plan in response to unplanned disruptions. It can be difficult to predict how good a plan will be in practice. The main performance indicator for a train service should be reliability: the proportion of trains running the service that complete at or before the scheduled time. We define the robustness of a planned train service to be its expected reliability. The robustness of individual train services, and of a train plan as a whole, can be estimated by simulating the train plan many times with random, but realistic, perturbations to train departure times and segment durations, and then analysing the distributions of arrival times. This process can also be used to set arrival times that will achieve a desired level of robustness for each train service.
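The simulation idea generalises readily. The sketch below, a simplification not taken from the paper, perturbs the departure time and segment durations of a single train over many simulated runs and reads reliability off the arrival-time distribution; the noise models and all numbers are hypothetical.

```python
# Monte Carlo robustness sketch for one train service; noise models and
# all numbers are hypothetical simplifications.
import random

def simulate_arrival(departure, segment_durations, jitter=0.1):
    """One simulated run: perturb the departure and each segment duration."""
    t = departure + random.gauss(0, 5)                   # departure noise (min)
    for d in segment_durations:
        t += d * random.uniform(1 - jitter, 1 + jitter)  # segment-duration noise
    return t

def robustness(scheduled_arrival, departure, segments, runs=10_000):
    """Expected reliability: fraction of runs completing on or before schedule."""
    arrivals = [simulate_arrival(departure, segments) for _ in range(runs)]
    return sum(a <= scheduled_arrival for a in arrivals) / runs

print(robustness(scheduled_arrival=240, departure=0, segments=[60, 75, 80]))
```

Setting an arrival time for a desired robustness level is then just a percentile of the simulated arrival distribution (e.g., the 95th percentile for 95% robustness).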
Abstract:
Generally, the magnitude of pollutant emissions from diesel engines running on biodiesel fuel is ultimately coupled to the structure of the molecules that constitute the fuel. Previous studies demonstrated the relationship between the organic fraction of PM and its oxidative potential. Herein, emissions from a diesel engine running on different biofuels were analysed in more detail to explore the role different organic fractions play in the measured oxidative potential. In this work, a more detailed chemical analysis of biofuel PM was undertaken using a compact Time of Flight Aerosol Mass Spectrometer (c-ToF AMS). This enabled better identification of the different organic fractions that contribute to the overall measured oxidative potential. The concentration of reactive oxygen species (ROS) was measured using the profluorescent nitroxide molecular probe 9-(1,1,3,3-tetramethylisoindolin-2-yloxyl-5-ethynyl)-10-(phenylethynyl)anthracene (BPEAnit). The oxidative potential of the PM, measured through the ROS content, although proportional to the total organic content in certain cases, shows a much higher correlation with the oxygenated organic fraction as measured by the c-ToF AMS. This highlights the importance of knowing the surface chemistry of particles for assessing their health impacts. It also sheds light on new aspects of particulate emissions that should be taken into account when establishing relevant metrics for assessing the health implications of replacing diesel with alternative fuels.
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres like the Cloud. One of these challenges is managing Cloud resources for SaaS in a way that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses that gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are highly constrained, large-scale and complex combinatorial optimisation problems; therefore, evolutionary algorithms are adopted as the main technique for solving them.

The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response-time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of only a homogeneous type of component. A precise formulation of the composite SaaS placement problem is presented. A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms.

In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the dynamic environment of a Cloud, the current placement may need to be modified. Existing techniques focus mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise resource use and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS: the first GGA uses a repair-based method, while the second uses a penalty-based method, to handle the problem constraints. The experimental results confirm that the GGAs always produced a better reconfiguration placement plan than a common heuristic for clustering problems.

The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the constraints and achieve the objectives. The experimental results demonstrate that the HGA consistently outperforms a heuristic algorithm, achieving a low-cost scaling and placement plan.

This research has identified three significant new problems for composite SaaS in the Cloud and has developed several types of evolutionary algorithms to address them, contributing to the evolutionary computation field. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
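To make the placement idea concrete, here is a minimal sketch of a classical genetic algorithm of the general kind described above. The component demands, server capacities, dependency pairs and fitness weighting are hypothetical stand-ins, not the thesis formulation.

```python
# GA sketch for placing SaaS components onto servers; all data and the
# fitness weighting are hypothetical, not the thesis formulation.
import random

N_COMPONENTS, N_SERVERS = 8, 4
DEMAND = [2, 1, 3, 2, 1, 2, 3, 1]           # CPU units per component (hypothetical)
CAPACITY = [6, 6, 6, 6]                      # per-server capacity (hypothetical)
DEPENDS = [(0, 1), (1, 2), (3, 4), (5, 6)]   # dependent component pairs

def fitness(placement):
    """Penalise capacity violations; reward co-locating dependent components."""
    load = [0] * N_SERVERS
    for comp, server in enumerate(placement):
        load[server] += DEMAND[comp]
    over = sum(max(0, l - c) for l, c in zip(load, CAPACITY))
    comm = sum(placement[a] != placement[b] for a, b in DEPENDS)
    return -(10 * over + comm)

def evolve(pop_size=50, generations=200):
    pop = [[random.randrange(N_SERVERS) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_COMPONENTS)       # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                      # point mutation
                child[random.randrange(N_COMPONENTS)] = random.randrange(N_SERVERS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())   # best found component -> server assignment
```

The cooperative co-evolutionary and grouping variants differ chiefly in how the chromosome is decomposed (subpopulations per component, or one gene per group of components), rather than in this basic evolve-and-evaluate loop.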
Abstract:
Every year a number of pedestrians are struck by trains, resulting in death and serious injury. While much research has been conducted on train-vehicle collisions, very little is currently known about the aetiology of train-pedestrian collisions. To date, scant research has been undertaken to investigate the demographics of rule breakers, the frequency of deliberate violation versus error making, and the influence of the classic deterrence approach on subsequent behaviours. Aim This study aimed to identify pedestrians' self-reported reasons for engaging in violations at crossings, the frequency and nature of rule breaking, and whether the threat of sanctions influences such events. Method A questionnaire was administered to 511 participants of all ages. Results Analysis revealed that pedestrians (particularly younger groups) were more likely to commit deliberate violations than to make crossing errors (e.g., mistakes). The most frequent reasons given for deliberate violations were that participants were running late and did not want to miss their train, or believed that the gate was taking too long to open and so might be malfunctioning. In regard to classical deterrence, an examination of the perceived threat of being apprehended and fined for a crossing violation revealed that participants reported the highest mean scores for swiftness of punishment, which suggests they were generally aware that they would receive an "on the spot" fine. However, the overall mean scores for certainty and severity of sanctions (for violating the rules) indicate that participants did not perceive the certainty and severity of sanctions as very high. This paper further discusses the research findings in regard to the development of interventions designed to improve pedestrian crossing safety.
Abstract:
"The financial system is a key influencer of the health and efficiency of an economy. The role of the financial system is to gather money from people and businesses that currently have more money than they need and transfer it to those that can use it for either business or consumer expenditures. This flow of funds through financial markets and institutions in the Australian economy is huge (in the billions of dollars), affecting business profits, the rate of inflation, interest rates and the production of goods and services. In general, the larger the flow of funds and the more efficient the financial system, the greater the economic output and welfare in the economy. It is not possible to have a modern, complex economy such as that in Australia, without an efficient and sound financial system. The global financial crisis (GFC) of late 2007–09 (and the ensuing European debt crisis), where the global financial market was on the brink of collapse with only significant government intervention stopping a catastrophic global failure of the market, illustrated the importance of the financial system. Financial Markets, Institutions and Money 3rd edition introduces students to the financial system, its operations, and participants. The text offers a fresh, succinct analysis of the financial markets and discusses how the many participants in the financial system interrelate. This includes coverage of regulators, regulations and the role of the Reserve Bank of Australia, that ensure the system’s smooth running, which is essential to a modern economy. The text has been significantly revised to take into account changes in the financial world."---publisher website Table of Contents 1. The financial system - an overview 2. The Monetary Authorities 3. The Reserve Bank of Australia and interest rates 4. The level of interest rates 5. Mathematics of finance 6. Bond Prices and interest rate risk 7. The Structure of Interest Rates 8. Money Markets 9. Bond Markets 10. Equity Markets
Abstract:
National Australian reviews advocate exploring new models for preservice teacher education. This study investigates the outcomes of the School-Community Integrated Learning (SCIL) pathway as a model for advancing preservice teachers' understandings of teaching. Thirty-two final-year preservice teachers were surveyed, with extended written responses, on how the SCIL pathway advanced their understandings of teaching. Results indicated 100% agreement on 6 of the 27 survey items, and 78% or more of preservice teachers agreed that they had a range of experiences across the five categories (i.e., personal-professional skill development, understandings of system requirements, teaching practices, student behaviour and reflective practices). Extended responses suggested they had developed understandings around setting up classrooms, whole-school planning processes with professional development, the allocation of teacher responsibilities (e.g., playground duties), parent-teacher interviews, diagnostic testing for literacy and numeracy, commencing running records of students' assessment results, and the development of relationships (with students, teachers and parents). Although a longitudinal study is required to determine long-term effects, the SCIL pathway may be viewed as a positive step towards preparing final-year preservice teachers for their first year as fully fledged teachers.
Abstract:
The 48 hour game making challenge has been running since 2007. In recent years, we have not only been running a 'game jam' for the local community but have also been exploring the way in which the event itself, and the place of the event, has the potential to create its own stories. Game jams are the creative festivals of the game development community, and a game jam is very much an event or performance; its stories are those of subjective experience. Participants return year after year and recount personal stories from previous challenges; arrival at the 48hr location typically inspires instances of individual memory and narration more in keeping with those of a music festival or an oft-frequented holiday destination. Since its inception, the 48hr has been heavily documented, from the photo-blogging of our first jam and the Twitter streams of more recent events to more formal interviews and documentaries (see Anderson, 2012). We have even had our own moments of Gonzo journalism, with an on-site press room one year and an 'embedded' journalist another year (Keogh, 2011). In the last two years of the 48hr we have started to explore ways and means to collect more abstract data during the event, that is, empirical data about movement and activity. The intent behind this form of data collection was to explore graphic and computer-generated visualisations of the event, not for the purpose of formal analysis but in the service of further storytelling. [excerpt from truna aka j.turner, Thomas & Owen, 2013] See: truna aka j.turner, Thomas & Owen (2013) Living the indie life: mapping creative teams in a 48 hour game jam and playing with data, Proceedings of the 9th Australasian Conference on Interactive Entertainment, IE'2013, September 30 - October 01 2013, Melbourne, VIC, Australia
Abstract:
The 2 hour game jam was performed as part of the State Library of Queensland 'Garage Gamer' series of events, summer 2013, at the SLQ exhibition. An aspect of the exhibition was the series of 'Level Up' game nights. We hosted the first of these, under the auspices of brIGDA, Game On. It was a party, but the focal point of the event was a live-streamed 2 hour game jam. Game jams have become popular amongst the game development and design community in recent years, particularly with the growth of the Global Game Jam, a yearly event which brings thousands of game makers together across different sites in different countries. Other established jams take place online, for example the Ludum Dare challenge, which has been running since 2002. Other challenges follow the same model in more intimate circumstances, and it is now common to find institutions and groups holding their own small local game making jams. There are variations around the format (some jams are more competitive than others, for example), but a common aspect is the creation of an intense creative crucible centred around teamwork and 'accelerated game development'. Works (games) produced during these intense events often display more experimental qualities than those undertaken as commercial projects. In part this is because the typical jam is started with a conceptual design brief, perhaps a single word, or in the case of the specific game jam described in this paper, three words. Teams have to envision the challenge keyword/s as a game design using whatever skills and technologies they can, and produce a finished working game in the time given. Game jams thus provide design researchers with extraordinary fodder, and recent years have also seen a number of projects which seek to illuminate the design process as seen in these events. For example, Gaydos, Harris and Martinez discuss the opportunity of the jam to expose students to principles of design process and design spaces (2011). Rouse muses on the game jam 'as radical practice' and a 'corrective to game creation as it is normally practiced'. His observations about his own experience in a jam emphasise the same artistic endeavour foregrounded earlier, where the experience is about creation that is divorced from the instrumental motivations of commercial game design (Rouse 2011) and where the focus is on process over product. Other participants remark on the social milieu of the event as a critical factor and the collaborative opportunity as a rich site for engaging participants in design processes (Shin et al, 2012). Shin et al are particularly interested in the notion of the site of the process and the ramifications of participants being in the same location. They applaud the more localised event, where there is an emphasis on local participation and collaboration. For other commentators, it is specifically the social experience in the place of the jam that is the most important aspect (see Keogh 2011): not the material site but rather the physical, embodied experience of 'being there' and being part of the event. Participants talk about game jams they have attended in a manner similar to the observations made by Dourish, where the experience is layered on top of the physical space of the event (Dourish 2006). It is as if the event has taken on qualities of place, where we find echoes of Tuan's description of a particular site having an aura of history that makes it a very different place, redolent and evocative (Tuan 1977).
The 2 hour game jam held during the SLQ Garage Gamer program was all about social experience.
Abstract:
This paper presents a new framework for distributed intrusion detection based on taint marking. Our system tracks information flows between applications on multiple hosts gathered in groups (i.e., sets of hosts sharing the same distributed information flow policy) by attaching taint labels to system objects such as files, sockets, Inter-Process Communication (IPC) abstractions, and memory mappings. Labels are carried over the network by tainting network packets. A distributed information flow policy is defined for each group at the host level by labeling information and defining how users and applications can legally access, alter or transfer information towards other trusted or untrusted hosts. As opposed to existing approaches, where information is most often represented by two security levels (low/high, public/private, etc.), our model identifies each piece of information within a distributed system and defines their legal interactions in a fine-grained manner. Hosts store and exchange security labels in a peer-to-peer fashion, and there is no central monitor. Our IDS is implemented in the Linux kernel as a Linux Security Module (LSM) and runs standard software on commodity hardware with no modification required. The only trusted code is our modified operating system kernel. We finally present a scenario of intrusion into a web service running on multiple hosts, and show how our distributed IDS is able to report security violations at each host level.
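As a conceptual illustration of the label mechanics described above (the actual system is a C Linux Security Module in the kernel), the toy Python model below shows taint labels propagating from objects to processes and across the network; the object names, label values and clearance policy are hypothetical.

```python
# Toy model of taint-label propagation; the real system is a kernel LSM.
# Names, labels and the clearance policy here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TaintedObject:
    name: str                                  # file, socket, IPC channel, mapping
    labels: set = field(default_factory=set)

def read_from(subject: TaintedObject, obj: TaintedObject):
    """Information flows obj -> subject, so the subject inherits obj's labels."""
    subject.labels |= obj.labels

def send_packet(sock: TaintedObject, data_labels: set) -> dict:
    """Labels are carried over the network by tainting the packet."""
    return {"labels": sock.labels | data_labels}

def may_receive(packet_labels: set, host_clearance: set) -> bool:
    """Fine-grained policy: the host must be cleared for every label."""
    return packet_labels <= host_clearance

secret = TaintedObject("/etc/shadow", {"secret"})
worker = TaintedObject("worker-process")
read_from(worker, secret)                        # the process is now tainted
pkt = send_packet(TaintedObject("tcp:443", worker.labels), set())
print(may_receive(pkt["labels"], {"public"}))    # False -> report a violation
```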
Abstract:
Growing up, my family worshipped at the altar of unionism. My parents embraced ‘working class’ as an active social position not as a step on the aspirational treadmill. In those days and in the areas where I lived, it was nothing special. It was a given that everyone was in a union and voted Labor, manning factories and building sites and marching or striking when the need arose...
Abstract:
Re-programming of gene expression is fundamental for skeletal muscle adaptations in response to endurance exercise. This study investigated the time-course-dependent changes in the muscular transcriptome following an endurance exercise trial consisting of 1 h of intense cycling immediately followed by 1 h of intense running. Skeletal muscle samples were taken at baseline and at 3 h, 48 h, and 96 h post-exercise from eight healthy, endurance-trained male individuals. RNA was extracted from muscle. Differential gene expression was evaluated using Illumina microarrays and validated with qPCR. Gene set enrichment analysis identified enriched molecular signatures chosen from the Molecular Signatures Database. Three hours post-exercise, 102 gene sets were up-regulated [family-wise error rate (FWER), P < 0.05], including groups of genes related to leukocyte migration, immune and chaperone activation, and cyclic AMP responsive element binding protein (CREB) 1 signaling. Forty-eight hours post-exercise, among 19 enriched gene sets (FWER, P < 0.05), two gene sets related to actin cytoskeleton remodeling were up-regulated. Ninety-six hours post-exercise, 83 gene sets were enriched (FWER, P < 0.05), 80 of which were up-regulated, including gene groups related to chemokine signaling, cell stress management, and extracellular matrix remodeling. These data provide comprehensive insights into the molecular pathways involved in acute stress, recovery, and adaptive muscular responses to endurance exercise. The novel 96 h post-exercise transcriptome indicates substantial transcriptional activity, potentially associated with the prolonged presence of leukocytes in the muscles. This suggests that muscular recovery, from a transcriptional perspective, is incomplete 96 h after endurance exercise involving muscle damage.
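As a rough illustration of the enrichment logic (the study used the full GSEA permutation procedure with FWER control, which this does not reproduce), a hypergeometric over-representation test asks whether a gene set overlaps the up-regulated genes more than chance would predict; all counts below are hypothetical.

```python
# Simplified over-representation test; the study's GSEA/FWER procedure is
# more involved. All gene counts here are hypothetical.
from scipy.stats import hypergeom

genome_size   = 20_000   # background genes measured on the array
n_upregulated = 1_500    # genes up-regulated post-exercise
set_size      = 200      # genes in, e.g., a leukocyte-migration set
overlap       = 45       # up-regulated genes that fall inside the set

# P(X >= overlap) when n_upregulated genes are drawn at random
p = hypergeom.sf(overlap - 1, genome_size, set_size, n_upregulated)
print(f"enrichment p = {p:.3g}")
```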
Abstract:
BACKGROUND: An examination of melanoma incidence according to anatomical region may be one method of monitoring the impact of public health initiatives. OBJECTIVES: To examine melanoma incidence trends by body site, sex and age at diagnosis, and by body site and morphology, in a population at high risk. MATERIALS AND METHODS: Population-based data on invasive melanoma cases (n = 51,473) diagnosed between 1982 and 2008 were extracted from the Queensland Cancer Registry. Age-standardized incidence rates were calculated using the direct method (2000 world standard population) and joinpoint regression models were used to fit trend lines. RESULTS: Significantly decreasing trends for melanomas on the trunk and upper limbs/shoulders were observed during recent years for both sexes under the age of 40 years and among males aged 40-59 years. However, in the 60-and-over age group, the incidence of melanoma is continuing to increase at all sites (apart from the trunk) for males, and on the scalp/neck and upper limbs/shoulders for females. Rates of nodular melanoma are currently decreasing on the trunk and lower limbs. In contrast, superficial spreading melanoma is significantly increasing on the scalp/neck and lower limbs, along with substantial increases in lentigo maligna melanoma since the late 1990s at all sites apart from the lower limbs. CONCLUSIONS: In this large study we have observed significant decreases in rates of invasive melanoma in the younger age groups on less frequently exposed body sites. These results may provide some indirect evidence of the impact of long-running primary prevention campaigns.
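The direct age-standardisation step mentioned in the methods reduces to a weighted sum of age-specific rates. A minimal sketch, with hypothetical counts, person-years and standard-population weights (not the Queensland data, and far fewer age bands than the real 2000 world standard):

```python
# Direct age-standardisation sketch; all numbers are hypothetical.
cases        = [12, 85, 240, 310]         # incident cases per age band
person_years = [9e5, 8e5, 6e5, 4e5]       # population at risk per age band
std_weights  = [0.35, 0.30, 0.22, 0.13]   # standard-population weights (sum to 1)

age_specific = [c / p * 1e5 for c, p in zip(cases, person_years)]  # per 100,000
asr = sum(w * r for w, r in zip(std_weights, age_specific))
print(f"age-standardised rate: {asr:.1f} per 100,000 person-years")
```

Joinpoint regression then fits piecewise log-linear trends to such yearly rates and tests where the slope changes.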
Abstract:
Recent modelling of socio-economic costs by the Australian railway industry in 2010 estimated the cost of level crossing accidents to exceed AU$116 million annually. To better understand the causal factors that contribute to these accidents, the Cooperative Research Centre for Rail Innovation is running a project entitled Baseline Level Crossing Video. The project aims to improve the recording of level crossing safety data by developing an intelligent system capable of detecting near-miss incidents and capturing quantitative data around these incidents. To detect near-miss events at railway level crossings, a video analytics module is being developed to analyse video footage obtained from forward-facing cameras installed on trains. This paper presents a vision-based approach for the detection of these near-miss events. The video analytics module comprises object detectors and a rail detection algorithm, allowing the distance between a detected object and the rail to be determined. An existing, publicly available Histogram of Oriented Gradients (HOG) based object detector is used to detect various types of vehicles in each video frame. As vehicles are usually seen side-on from the cabin's perspective, the results of the vehicle detector are verified using an algorithm that detects the wheels of each detected vehicle. Rail detection is facilitated using a projective transformation of the video, such that the forward-facing view becomes a bird's-eye view. A Line Segment Detector is employed as the feature extractor, and a sliding-window approach is developed to track the pair of rails. Vehicles are localised by projecting the results of the vehicle and rail detectors onto the ground plane, allowing the distance between vehicle and rail to be calculated. The resulting vehicle positions and distances are logged to a database for further analysis. We present preliminary results regarding the performance of a prototype video analytics module on a data set of videos covering more than 30 different railway level crossings, captured from train journeys passing through those crossings.
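A hedged sketch of two of the pipeline stages described above: object detection in the forward-facing frame, and projection onto a bird's-eye ground plane for distance measurement. It uses OpenCV's stock HOG people detector as a stand-in (the project used trained vehicle detectors with wheel verification), and the homography points, scale and rail position are hypothetical calibration values.

```python
# Illustrative pipeline stages only; the detector, calibration points and
# scale are stand-ins, not the project's actual models or calibration.
import cv2
import numpy as np

# Stage 1: HOG-based detection in the forward-facing frame.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

# Stage 2: projective transform, forward-facing view -> bird's-eye view.
src = np.float32([[300, 400], [500, 400], [700, 600], [100, 600]])  # image px
dst = np.float32([[0, 0], [200, 0], [200, 400], [0, 400]])          # ground px
H = cv2.getPerspectiveTransform(src, dst)

def distances_to_rail(frame, rail_x_ground, metres_per_px=0.05):
    """Project each detection's ground-contact point onto the ground plane
    and measure its lateral distance to an already-detected rail position."""
    boxes, _ = hog.detectMultiScale(frame)
    out = []
    for (x, y, w, h) in boxes:
        foot = np.float32([[[x + w / 2, y + h]]])        # bottom-centre of box
        gx, _ = cv2.perspectiveTransform(foot, H)[0, 0]
        out.append(abs(gx - rail_x_ground) * metres_per_px)
    return out                                           # log for later analysis

frame = cv2.imread("crossing_frame.png")                 # hypothetical frame
if frame is not None:
    print(distances_to_rail(frame, rail_x_ground=100))
```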