905 results for Running Kinematics
Abstract:
Neutrophils serve as an intriguing model for the study of innate immune cellular activity induced by physiological stress. We measured changes in the transcriptome of circulating neutrophils following an experimental exercise trial (EXTRI) consisting of 1 h of intense cycling immediately followed by 1 h of intense running. Blood samples were taken at baseline, 3 h, 48 h, and 96 h post-EXTRI from eight healthy, endurance-trained, male subjects. RNA was extracted from isolated neutrophils. Differential gene expression was evaluated using Illumina microarrays and validated with quantitative PCR. Gene set enrichment analysis identified enriched molecular signatures chosen from the Molecular Signatures Database. Blood concentrations of muscle damage indices, neutrophils, interleukin (IL)-6 and IL-10 were increased (P < 0.05) 3 h post-EXTRI. Upregulated groups of functionally related genes 3 h post-EXTRI included gene sets associated with the recognition of tissue damage, the IL-1 receptor, and Toll-like receptor (TLR) pathways (familywise error rate, P value < 0.05). The core enrichment for these pathways included TLRs, low-affinity immunoglobulin receptors, S100 calcium binding protein A12, and negative regulators of innate immunity, e.g., IL-1 receptor antagonist and IL-1 receptor-associated kinase-3. Plasma myoglobin changes correlated with neutrophil TLR4 gene expression (r = 0.74; P < 0.05). Neutrophils had returned to their nonactivated state 48 h post-EXTRI, indicating that their initial proinflammatory response was transient and rapidly counterregulated. This study provides novel insight into the signaling mechanisms underlying neutrophil responses to endurance exercise, suggesting that their transcriptional activity was particularly induced by damage-associated molecular patterns, hypothetically originating from the leakage of muscle components into the circulation.
Abstract:
Electricity cost has become a major expense for running data centers, and server consolidation using virtualization technology has become an important means of improving their energy efficiency. In this research, a genetic algorithm and a simulated annealing algorithm are proposed for the static virtual machine placement problem that considers the energy consumption in both the servers and the communication network, and a trading algorithm is proposed for dynamic virtual machine placement. Experimental results have shown that the proposed methods are more energy efficient than existing solutions.
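The abstract does not reproduce the algorithms themselves; as an illustration only, a minimal simulated-annealing sketch for static VM placement under a made-up server-energy model (all demands, capacities and power figures below are hypothetical, not from the paper) might look like this:

```python
import math
import random

random.seed(0)

# Hypothetical instance: 6 VMs with CPU demands, 4 identical servers.
VM_DEMAND = [20, 35, 15, 40, 25, 10]
SERVER_CAP = [60, 60, 60, 60]
IDLE_POWER, POWER_PER_UNIT = 100.0, 2.0  # illustrative energy model

def energy(placement):
    """Total power: idle cost for each active server plus load-proportional cost."""
    load = [0] * len(SERVER_CAP)
    for vm, srv in enumerate(placement):
        load[srv] += VM_DEMAND[vm]
    if any(l > c for l, c in zip(load, SERVER_CAP)):
        return float("inf")  # infeasible placement
    return sum(IDLE_POWER + POWER_PER_UNIT * l for l in load if l > 0)

def anneal(steps=5000, t0=50.0, cooling=0.999):
    cur = [i % len(SERVER_CAP) for i in range(len(VM_DEMAND))]  # feasible start
    best, t = cur[:], t0
    for _ in range(steps):
        nxt = cur[:]
        nxt[random.randrange(len(nxt))] = random.randrange(len(SERVER_CAP))
        delta = energy(nxt) - energy(cur)
        # Metropolis criterion: always accept improvements, sometimes accept worse
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = nxt
        if energy(cur) < energy(best):
            best = cur[:]
        t *= cooling
    return best

best = anneal()
print(best, energy(best))
```

Consolidating VMs onto fewer servers saves the per-server idle power, which is why the search tends toward packed placements.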
Abstract:
Reliability analysis is crucial to reducing the unexpected downtime, severe failures and ever-tightening maintenance budgets of engineering assets. Hazard-based reliability methods are of particular interest because hazard reflects the current health status of engineering assets and their imminent failure risks. Most existing hazard models were constructed using statistical methods. However, these methods rest largely on two assumptions: that the baseline failure distribution is accurate for the population concerned, and that the effects of covariates on hazards take an assumed form. These two assumptions may be difficult to satisfy and therefore compromise the effectiveness of hazard models in application. To address this issue, a non-linear hazard modelling approach is developed in this research using neural networks (NNs), resulting in neural network hazard models (NNHMs), to overcome the limitations imposed by the two assumptions of statistical models. With the success of failure prevention efforts, less failure history becomes available for reliability analysis. Involving condition data or covariates is a natural solution to this challenge. A critical issue in involving covariates in reliability analysis is that complete and consistent covariate data are often unavailable in reality due to inconsistent measuring frequencies of multiple covariates, sensor failure, and sparse intrusive measurements. This problem has not been studied adequately in current reliability applications. This research thus investigates the incomplete covariates problem in reliability analysis. Typical approaches to handling incomplete covariates have been studied to investigate their performance and effects on reliability analysis results.
Since these existing approaches can underestimate the variance in regressions and introduce extra uncertainty into reliability analysis, the developed NNHMs are extended to handle incomplete covariates as an integral part of the analysis. The extended versions of the NNHMs have been validated using simulated bearing data and real data from a liquefied natural gas pump. The results demonstrate that the new approach outperforms the typical incomplete-covariate handling approaches. Another problem in reliability analysis is that future covariates of engineering assets are generally unavailable. In existing practice for multi-step reliability analysis, historical covariates are used to estimate future covariates. Covariates of engineering assets, however, are often subject to substantial fluctuation due to the influence of both engineering degradation and changes in environmental settings. The commonly used covariate extrapolation methods are thus unsuitable because of error accumulation and uncertainty propagation. To overcome this difficulty, instead of directly extrapolating covariate values, this research projects covariate states. The estimated covariate states and the unknown covariate values in future running steps of assets constitute an incomplete covariate set, which is then analysed by the extended NNHMs. A new assessment function is also proposed to evaluate the risks of underestimated and overestimated reliability analysis results. A case study using field data from a paper and pulp mill has been conducted, and it demonstrates that this new multi-step reliability analysis procedure generates more accurate analysis results.
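The abstract does not name the "typical approaches" to incomplete covariates; two common generic ones, mean imputation and last-observation-carried-forward, can be sketched as follows (the covariate series is invented for illustration):

```python
# Two common ways to fill gaps in a covariate series (None = missing reading).
series = [0.8, None, 1.1, None, None, 1.6, 1.9]

def mean_impute(xs):
    """Replace each gap with the mean of the observed values."""
    observed = [x for x in xs if x is not None]
    m = sum(observed) / len(observed)
    return [m if x is None else x for x in xs]

def locf(xs):
    """Last observation carried forward (assumes the first value is observed)."""
    out, last = [], None
    for x in xs:
        last = x if x is not None else last
        out.append(last)
    return out

print(mean_impute(series))
print(locf(series))
```

Both fills produce a complete series but understate its variability, which is the variance-underestimation problem the extended NNHMs are designed to avoid.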
Abstract:
Background: Few longitudinal studies have examined the mental health outcomes of women after abortion, and the results are controversial. Despite falling birth rates, teenage pregnancies remain high, and over half (53%) of teenage and a third (36%) of young adult (20–24 years) pregnancies are aborted. Recent findings from a New Zealand longitudinal birth cohort linked abortion and subsequent psychiatric disorders in young women. Limited Australian data are available examining this association. Methods: Data were taken from the Mater-University Study of Pregnancy (MUSP). Running since 1981, this is a prospective birth cohort study of 7223 mothers and children. At the 21-year follow-up 3775 (52.3% of the original cohort) participants were surveyed; of these, 1132 young women had complete data on pregnancy outcomes and psychiatric diagnoses from a structured interview. Binary logistic regression examined the association between five lifetime psychiatric disorders (nicotine, alcohol, cannabis, affective and anxiety disorders) and ever having an abortion or birth. Analyses adjusted for age, concurrent and maternal sociodemographic factors, and factors related to adolescent behaviour, previous mental health and family functioning. Results: A quarter of the young women (n = 261) reported at least one pregnancy, and 32.6% had an abortion. In age-adjusted analyses, abortion was significantly associated with all the lifetime disorders. After full adjustment abortion remained significantly associated with nicotine (OR = 2.1, 1.2–3.6) and alcohol disorders (OR = 2.0, 1.3–3.3). Conclusion: The findings suggest that abortion in young women is independently associated with an increased risk of nicotine and alcohol disorders.
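For a single unadjusted comparison, the kind of odds ratio quoted above can be computed directly from a 2 × 2 table; the counts below are invented for illustration, and the interval uses the standard Woolf (log-odds) method rather than the study's regression adjustment:

```python
import math

# Hypothetical 2x2 table (counts are illustrative, not from the study):
# exposure = ever had an abortion, outcome = nicotine disorder.
a, b = 30, 55    # exposed: with / without the disorder
c, d = 80, 400   # unexposed: with / without the disorder

odds_ratio = (a * d) / (b * c)

# Approximate 95% interval on the log-odds scale (Woolf method)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```

An interval whose lower bound stays above 1 is what "significantly associated" means for the ORs reported in the abstract.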
Abstract:
The invention relates to a method for monitoring user activity on a mobile device comprising an input and an output unit, comprising the following steps, preferably in the following order: detecting and/or logging user activity on said input unit, identifying a foreground running application, hashing a user-interface-element management list of the foreground running application, and creating a screenshot comprising items displayed on said input unit. The invention also relates to a method for analyzing user activity at a server, comprising the following step: obtaining at least one of information about detected and/or logged user activity, information about a foreground running application, a hashed user-interface-element management list, and a screenshot from a mobile device. Further, a computer program product is provided, comprising one or more computer-readable media having computer-executable instructions for performing the steps of at least one of the aforementioned methods.
Abstract:
XRD (X-ray diffraction), XRF (X-ray fluorescence), TG (thermogravimetry), FT-IES (Fourier transform infrared emission spectroscopy), FESEM (field emission scanning electron microscopy), TEM (transmission electron microscopy) and nitrogen adsorption–desorption analysis were used to characterize the composition and thermal evolution of the structure of natural goethite. In situ FT-IES demonstrated the onset temperature (250 °C) of the transformation of natural goethite to hematite and the thermodynamic stability of protohematite between 250 and 600 °C. The heated products showed a topotactic relationship to the original mineral based on SEM analysis. Finally, the nitrogen adsorption–desorption isotherms provided the variation of surface area and pore size distribution as a function of temperature. The surface area increased markedly up to 350 °C and then decreased above this temperature. The significant increase in surface area was attributed to the formation of regularly arranged slit-shaped micropores running parallel to the elongated direction of the hematite microcrystals. The main pore size varied from 0.99 nm to 3.5 nm as the heating temperature increased from 300 to 400 °C. The hematite derived from heating goethite possesses a high surface area, favoring its possible application as an adsorbent and catalyst carrier.
Abstract:
It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. This survey examines existing approaches in this field based on a common set of criteria and illustrates their key concepts using a running example. The analysis shows that existing approaches are characterized by the fact that they extend a conventional process modeling language with constructs that enable it to capture customizable process models. A customizable process model represents a family of process variants in such a way that each variant can be derived by adding or deleting fragments according to configuration parameters or according to a domain model. The survey reveals an abundance of customizable process modeling languages, embodying a diverse set of constructs. In contrast, there is comparatively little tool support for analyzing and constructing customizable process models, as well as a scarcity of empirical evaluations of languages in the field.
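The idea of deriving a variant from a customizable process model by deleting fragments according to configuration parameters can be illustrated with a toy example (activity and option names are invented, and real approaches use far richer languages):

```python
# Toy customizable process model: each activity is tagged with the
# configuration options under which it applies (names are made up).
MODEL = [
    ("receive_order", {"retail", "wholesale"}),
    ("credit_check", {"wholesale"}),
    ("pick_items", {"retail", "wholesale"}),
    ("gift_wrap", {"retail"}),
    ("ship", {"retail", "wholesale"}),
]

def derive_variant(option):
    """Derive one process variant by deleting fragments that do not match."""
    return [activity for activity, options in MODEL if option in options]

print(derive_variant("retail"))
print(derive_variant("wholesale"))
```

One shared model thus stands in for the whole family of variants, which is the core benefit the surveyed languages provide.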
Abstract:
The findings presented in this paper are part of a research project designed to provide a preliminary indication of the support needs of postdiagnosis women with breast cancer in remote and isolated areas of Queensland. This discussion presents data focusing on the women's expressed personal concerns. For participants in this research, a diagnosis of breast cancer involves a confrontation with their own mortality and the possibility of a reduced life span. This is a definite life crisis, creating shock and requiring considerable adjustment. Along with these generic issues, the participants also articulated significant issues relating to their experience as women in a rural setting. These concerns centred on worries about how their partners and families would cope during their absences for treatment, the additional burden on the family of having to run the property or farm during the participant's absence or illness, the added financial strain of travel for treatment, the maintenance of properties during absences, and the problems created by time away from properties or self-employment. These findings accord with other reports on health and welfare services for rural Australians and with the generic psycho-oncology literature on breast cancer.
Abstract:
Cross-Lingual Link Discovery (CLLD) is a new problem in Information Retrieval. The aim is to automatically identify meaningful and relevant hypertext links between documents in different languages. This is particularly helpful in knowledge discovery if a multi-lingual knowledge base is sparse in one language or another, or the topical coverage in each language is different; such is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR, where the CrossLink task has been running since NTCIR-9 in 2011. This paper presents the evaluation framework for benchmarking cross-lingual link discovery algorithms in the context of NTCIR-9. This framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment, and evaluation. The assessments are further divided into two separate sets: manual assessments performed by human assessors, and automatic assessments based on links extracted from Wikipedia itself. Using this framework we show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
Abstract:
Different locally available biomass solid wastes (pine seed, date seed, plum seed, nutshell, hay of catkin, rice husk, jute stick, sawdust, wheat straw and linseed residue) in particle form have been pyrolyzed in a laboratory-scale fixed-bed reactor. The products obtained are pyrolysis oil, solid char and gas. The oil and char are collected while the gas is flared into the atmosphere. The variation of oil yield for the different biomass feedstocks with reaction parameters such as reactor bed temperature, feed size and running time is presented in a comparative way in the paper. A maximum liquid yield of 55 wt% of dry feedstock is obtained at an optimum temperature of 500 °C for a feed size of 300-600 μm with a running time of 55 min with nutshell as the feedstock, while the minimum liquid yield of 30 wt% of feedstock is found at an optimum temperature of 400 °C for a feed size of 2.36 mm with a running time of 65 min for linseed residue. A detailed study of the variation of product yields with reaction parameters is presented for the latest investigation, with pine seed as the feedstock, where a maximum liquid yield of 40 wt% of dry feedstock is obtained at an optimum temperature of 500 °C for a feed size of 2.36-2.76 mm with a running time of 120 min. The characterization of the pyrolysis oil is carried out and a comparison of some selected properties of the oil is presented. The study shows that biomass solid wastes have the potential to be converted into liquid oil as a source of renewable energy, with some further upgrading of the products.
Abstract:
In this study, a tandem LC-MS (Waters Xevo TQ) MRM-based method was developed for rapid, broad profiling of hydrophilic metabolites from biological samples, in either positive or negative ion mode without the need for an ion-pairing reagent, using a reversed-phase pentafluorophenylpropyl (PFPP) column. The developed method was successfully applied to analyze various biological samples from C57BL/6 mice, including urine, duodenum, liver, plasma, kidney, heart, and skeletal muscle. As a result, a total of 112 hydrophilic metabolites were detected within 8 min of running time to obtain a metabolite profile of the biological samples. The analysis of this number of hydrophilic metabolites is significantly faster than in previous studies. The classification separation of metabolites from different tissues was globally analyzed with the PCA, PLS-DA and HCA biostatistical methods. Overall, most of the hydrophilic metabolites were found to have a "fingerprint" characteristic of tissue dependency. In general, a higher level of most metabolites was found in urine, duodenum, and kidney. Altogether, these results suggest that this method has potential application for targeted metabolomic analyses of hydrophilic metabolites in a wide range of biological samples.
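The PCA step used for classification separation can be sketched on synthetic data; the intensity matrix below is simulated with arbitrary tissue means (it is not the study's measurements), just to show how samples from two tissues separate along the first principal component:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metabolite intensity matrix: rows = samples from two tissues,
# columns = metabolites (values simulated, not from the study).
liver = rng.normal(5.0, 0.5, size=(10, 20))
kidney = rng.normal(7.0, 0.5, size=(10, 20))
X = np.vstack([liver, kidney])

# PCA via SVD of the mean-centred matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T  # project samples onto the first two components

# The tissue groups land on opposite sides of the origin along PC1
liver_pc1, kidney_pc1 = scores[:10, 0].mean(), scores[10:, 0].mean()
print(liver_pc1, kidney_pc1)
```

This unsupervised separation is what a tissue-dependent "fingerprint" looks like in score space; PLS-DA adds class labels to sharpen the same picture.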
Abstract:
In Australia, and elsewhere, the movement of trains on long-haul rail networks is usually planned in advance. Typically, a train plan is developed to confirm that the required train movements and track maintenance activities can occur. The plan specifies when track segments will be occupied by particular trains and maintenance activities. On the day of operation, a train controller monitors and controls the movement of trains and maintenance crews, and updates the train plan in response to unplanned disruptions. It can be difficult to predict how good a plan will be in practice. The main performance indicator for a train service should be reliability - the proportion of trains running the service that complete at or before the scheduled time. We define the robustness of a planned train service to be the expected reliability. The robustness of individual train services and for a train plan as a whole can be estimated by simulating the train plan many times with random, but realistic, perturbations to train departure times and segment durations, and then analysing the distributions of arrival times. This process can also be used to set arrival times that will achieve a desired level of robustness for each train service.
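The robustness-estimation procedure described above can be sketched as follows; the segment durations, slack and perturbation distributions are invented for illustration, not taken from any real train plan:

```python
import random

random.seed(1)

# Hypothetical service: 5 track segments with planned durations (minutes).
PLANNED = [12, 8, 15, 10, 9]
SCHEDULED_ARRIVAL = sum(PLANNED) + 6  # 6 min of built-in slack

def simulate_run():
    """One simulated run: random departure delay plus perturbed segment times."""
    depart_delay = max(0.0, random.gauss(1.0, 2.0))
    total = depart_delay
    for d in PLANNED:
        total += d * random.uniform(0.95, 1.15)
    return total

def robustness(n=10000):
    """Expected reliability: fraction of simulated runs arriving on time."""
    on_time = sum(simulate_run() <= SCHEDULED_ARRIVAL for _ in range(n))
    return on_time / n

print(round(robustness(), 3))
```

Running the same simulation with different amounts of slack is how a scheduled arrival time can be chosen to hit a target robustness for each service.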
Abstract:
Although Basin and Range–style extension affected large areas of western Mexico after the Late Eocene, most consider that extension in the Gulf of California region began as subduction waned and ended ca. 14–12.5 Ma. A general consensus also exists in considering Early and Middle Miocene volcanism of the Sierra Madre Occidental and Comondú Group as subduction related, whereas volcanism after ca. 12.5 Ma is extension related. Here we present a new regional geologic study of the eastern Gulf of California margin in the states of Nayarit and Sinaloa, Mexico, backed by 43 new Ar-Ar and U-Pb mineral ages, and geochemical data that document an earlier widespread phase of extension. This extension across the southern and central Gulf Extensional Province began between Late Oligocene and Early Miocene time, but was focused in the region of the future Gulf of California in the Middle Miocene. Late Oligocene to Early Miocene rocks across northern Nayarit and southern Sinaloa were affected by major approximately north-south– to north-northwest–striking normal faults prior to ca. 21 Ma. Between ca. 21 and 11 Ma, a system of north-northwest–south-southeast high angle extensional faults continued extending the southwestern side of the Sierra Madre Occidental. Rhyolitic domes, shallow intrusive bodies, and lesser basalts were emplaced along this extensional belt at 20–17 Ma. Rhyolitic rocks, in particular the domes and lavas, often show strong antecrystic inheritance but only a few Mesozoic or older xenocrysts, suggesting silicic magma generation in the mid-upper crust triggered by an extension-induced basaltic influx. In northern Sinaloa, large grabens were occupied by huge volcanic dome complexes ca. 21–17 Ma and filled by continental sediments with interlayered basalts dated as 15–14 Ma, a stratigraphy and timing very similar to those found in central Sonora (northeastern Gulf of California margin).
Early to Middle Miocene volcanism thus occurred in rift basins, and was likely associated with decompression melting of upper mantle (inducing crustal partial melting) rather than with fluxing by fluids from the young and slowly subducting microplates. Along the eastern side of the Gulf of California coast, from Farallón de San Ignacio island offshore Los Mochis, Sinaloa, to San Blas, Nayarit, a strike distance of >700 km, flat-lying basaltic lavas dated as ca. 11.5–10 Ma are exposed just above the present sea level. Here crustal thickness is almost half that in the unextended core of the adjacent Sierra Madre Occidental, implying significant lithosphere stretching before ca. 11 Ma. This mafic pulse, with subdued Nb-Ta negative spikes, may be related to the detachment of the lower part of the subducted slab, allowing an upward asthenospheric flow into an upper mantle previously modified by fluid fluxes related to past subduction. Widespread eruption of very uniform oceanic island basalt–like lavas occurred by the late Pliocene and Pleistocene, only 20 m.y. after the onset of rifting and ~9 m.y. after the end of subduction, implying that preexisting subduction-modified mantle had by then become isolated from melt source regions. Our study shows that rifting across the southern-central Gulf Extensional Province began much earlier than the Late Miocene and provided a fundamental control on the style and composition of volcanism from at least 30 Ma. We envision a sustained period of lithospheric stretching and magmatism during which the pace and breadth of extension changed ca. 20–18 Ma to become narrower, and again after ca. 12.5 Ma, when the kinematics of rifting became more oblique.
Abstract:
Generally, the magnitude of pollutant emissions from diesel engines running on biodiesel fuel is ultimately coupled to the structure of the molecules that constitute the fuel. Previous studies demonstrated the relationship between the organic fraction of PM and its oxidative potential. Herein, emissions from a diesel engine running on different biofuels were analysed in more detail to explore the role different organic fractions play in the measured oxidative potential. In this work, a more detailed chemical analysis of biofuel PM was undertaken using a compact Time of Flight Aerosol Mass Spectrometer (c-ToF AMS). This enabled better identification of the different organic fractions that contribute to the overall measured oxidative potential. The concentration of reactive oxygen species (ROS) was measured using the profluorescent nitroxide molecular probe 9-(1,1,3,3-tetramethylisoindolin-2-yloxyl-5-ethynyl)-10-(phenylethynyl)anthracene (BPEAnit). The oxidative potential of the PM, measured through the ROS content, although proportional to the total organic content in certain cases, shows a much higher correlation with the oxygenated organic fraction as measured by the c-ToF AMS. This highlights the importance of knowing the surface chemistry of particles when assessing their health impacts. It also sheds light on new aspects of particulate emissions that should be taken into account when establishing relevant metrics for assessing the health implications of replacing diesel with alternative fuels.
Abstract:
Cloud computing is an emerging computing paradigm in which IT resources are provided over the Internet as a service to users. One such service offered through the Cloud is Software as a Service, or SaaS. SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. SaaS is receiving substantial attention today from both software providers and users, and analyst firms predict a positive future market for it. This raises new challenges for providers managing SaaS, especially in large-scale data centres such as the Cloud. One of the challenges is providing management of Cloud resources for SaaS that maintains SaaS performance while optimising resource use. Extensive research on the resource optimisation of Cloud services has not yet addressed the challenges of managing resources for composite SaaS. This research addresses this gap by focusing on three new problems of composite SaaS: placement, clustering and scalability. The overall aim is to develop efficient and scalable mechanisms that facilitate the delivery of high-performance composite SaaS for users while optimising the resources used. All three problems are characterised as highly constrained, large-scale and complex combinatorial optimisation problems. Therefore, evolutionary algorithms are adopted as the main technique for solving them. The first research problem concerns how a composite SaaS is placed onto Cloud servers to optimise its performance while satisfying the SaaS resource and response time constraints. Existing research on this problem often ignores the dependencies between components and considers the placement of a homogeneous type of component only. A precise formulation of the composite SaaS placement problem is presented.
A classical genetic algorithm and two versions of cooperative co-evolutionary algorithms are designed to manage the placement of heterogeneous types of SaaS components together with their dependencies, requirements and constraints. Experimental results demonstrate the efficiency and scalability of these new algorithms. In the second problem, SaaS components are assumed to be already running on Cloud virtual machines (VMs). However, due to the changing Cloud environment, the current placement may need to be modified. Existing techniques have focused mostly on the infrastructure level rather than the application level. This research addresses the problem at the application level by clustering suitable components onto VMs to optimise the resources used and maintain SaaS performance. Two versions of grouping genetic algorithms (GGAs) are designed to cater for the structural grouping of a composite SaaS. The first GGA uses a repair-based method while the second uses a penalty-based method to handle the problem constraints. The experimental results confirm that the GGAs always produced a better reconfiguration placement plan than a common heuristic for clustering problems. The third research problem deals with the replication or deletion of SaaS instances to cope with the SaaS workload. Determining a scaling plan that minimises the resources used while maintaining SaaS performance is a critical task. Additionally, the problem involves constraints and interdependencies between components, making solutions even more difficult to find. A hybrid genetic algorithm (HGA) was developed to solve this problem by exploring the problem search space through its genetic operators and fitness function to determine the SaaS scaling plan. The HGA also uses the problem's domain knowledge to ensure that solutions meet the problem's constraints and achieve its objectives.
The experimental results demonstrated that the HGA consistently outperforms a heuristic algorithm by achieving a low-cost scaling and placement plan. This research has identified three significant new problems for composite SaaS in the Cloud. Various types of evolutionary algorithms have been developed to address these problems, contributing to the field of evolutionary computation. The algorithms provide solutions for efficient resource management of composite SaaS in the Cloud, resulting in a low total cost of ownership for users while guaranteeing SaaS performance.
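The penalty-based constraint handling mentioned for the second GGA can be sketched on a toy component-to-VM clustering instance; all demands, capacities, penalty weights and operator settings below are invented, and real GGAs use group-oriented encodings rather than this flat one:

```python
import random

random.seed(2)

# Toy clustering of SaaS components onto VMs (all numbers illustrative).
COMP_CPU = [15, 30, 10, 25, 20]   # component CPU demands
VM_CAP, VM_COST = 50, 1.0         # identical VMs, cost per VM used
PENALTY = 10.0                    # penalty per unit of capacity violation

def fitness(assign):
    """Cost of VMs used plus a penalty term for overloaded VMs (lower is better)."""
    load = {}
    for comp, vm in enumerate(assign):
        load[vm] = load.get(vm, 0) + COMP_CPU[comp]
    over = sum(max(0, l - VM_CAP) for l in load.values())
    return VM_COST * len(load) + PENALTY * over

def evolve(pop_size=30, gens=200, n_vms=5):
    pop = [[random.randrange(n_vms) for _ in COMP_CPU] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, len(COMP_CPU))
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if random.random() < 0.2:         # mutation: move one component
                child[random.randrange(len(child))] = random.randrange(n_vms)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

A repair-based variant would instead move components out of overloaded VMs after crossover, keeping every individual feasible rather than penalising infeasibility in the fitness function.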