624 results for logs
Abstract:
Tennis played at an elite level requires intensive training characterized by repeated bouts of brief, intermittent, high-intensity exercise over relatively long periods of time (1-3 h or more). Competition can place additional stress on players. The purpose of this study was to investigate the temporal association between specific components of tennis training and competition, the incidence of upper respiratory tract infections (URTI), and salivary IgA, in a cohort of seventeen elite female tennis players. Timed, whole unstimulated saliva samples were collected before and after selected 1-h training sessions at 2-weekly intervals over 12 weeks. Salivary IgA concentration was measured by ELISA and the IgA secretion rate calculated as IgA concentration (μg·ml⁻¹) multiplied by saliva flow rate (ml·min⁻¹). Players reported URTI symptoms and recorded training and competition in daily logs. Data analysis showed that a higher incidence of URTI was significantly associated with increased training duration and load, and with competition level, on a weekly basis. Salivary IgA secretion rate (S-IgA) dropped significantly after 1 h of tennis play. Over the 12-week period, pre-exercise salivary IgA concentration and secretion rate were directly associated with the amount of training undertaken during the previous day and week (p < 0.05). However, the decline in S-IgA after 1 h of intense tennis play was also positively related to the duration and load of training undertaken during the previous day and week (p < 0.05). Although exercise-induced suppression of salivary IgA may be a risk factor, it could not accurately predict the occurrence of URTI in this cohort of athletes.
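As a minimal illustration of the secretion-rate calculation described above (hypothetical numbers, not data from the study):

    # Secretion rate (ug/min) = IgA concentration (ug/ml) x saliva flow rate (ml/min).
    def siga_secretion_rate(iga_conc_ug_per_ml, saliva_volume_ml, collection_time_min):
        flow_rate_ml_per_min = saliva_volume_ml / collection_time_min
        return iga_conc_ug_per_ml * flow_rate_ml_per_min

    # e.g. 80 ug/ml IgA in 2.0 ml of saliva collected over 4 min -> 40 ug IgA/min
    print(siga_secretion_rate(80.0, 2.0, 4.0))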
Abstract:
In species with low levels of dispersal, the breeding of closely related individuals is a potential problem; sex-biased dispersal is one mechanism that may decrease the possibility of consanguineous mating. Fragmentation of the habitat in which a species lives may affect mechanisms such as sex-biased dispersal, which may in turn exacerbate more direct effects of fragmentation, such as decreasing population size, that may lead to inbreeding depression. Relatedness statistics calculated from microsatellite DNA data showed that rainforest fragmentation has affected the patterns of dispersal in the prickly forest skink (Gnypetoscincus queenslandiae), a rainforest endemic of the Wet Tropics of north-eastern Australia. A lower level of relatedness was found in fragments than in continuous forest sites, due to a significantly lower level of pairwise relatedness between males in rainforest fragments. The pattern of genetic relatedness between the sexes indicates the presence of male-biased dispersal in this species, with a stronger pattern detected in populations in rainforest fragments. Male prickly forest skinks may have to move further in fragmented habitat in order to find mates or suitable habitat logs.
Abstract:
We have made AMS measurements on a series of 10-ring samples from a subfossil Huon pine log found in western Tasmania (42°S, 145°E). The results show a pronounced rise in Δ14C over the first 200 years, and a decrease over the following 160 years. Tree-ring width measurements indicate that this log (catalogue SRT-447) can be cross-dated with another subfossil log (SRT-416) for which a series of high-precision radiometric C-14 measurements have previously been made. When the two tree-ring series are thus aligned, SRT-447 is the older of the two logs, and there is a 139-year overlap. We then have a Huon pine floating chronology spanning 680 years, with C-14 measurements attached. The C-14 data sets agree well within the period of overlap indicated by the tree-rings. The C-14 variations from Huon pine show excellent agreement with those from German oak and pine for the period 10,350-9670 cal BP. Aligning the Huon pine C-14 series with that from German oak and pine allows us to examine the inter-hemispheric offset in C-14 dates in the early Holocene. © 2004 Elsevier B.V. All rights reserved.
Abstract:
The Cervarola Sandstones Formation (CSF), Aquitanian-Burdigalian in age, was deposited in an elongate, NW-stretched foredeep basin formed in front of the growing Northern Apennines orogenic wedge. The stratigraphic succession of the CSF, like other Apennine foredeep deposits, records the progressive closure of the basin due to the propagation of thrust fronts toward the north-east, i.e. toward the outer and shallower foreland ramp. This process produces a complex foredeep characterized by synsedimentary structural highs and depocenters that can strongly influence the lateral and vertical turbidite facies distribution. Consequently, the main aim of this work is to describe and discuss this influence on the basis of a new high-resolution stratigraphic framework built by measuring ten stratigraphic logs, for a total thickness of about 2000 m, between the Secchia and Scoltenna Valleys (30 km apart). In particular, the relationship between turbidite sedimentation and the ongoing tectonic activity during the foredeep evolution has been described through various stratigraphic cross-sections oriented parallel and perpendicular to the main tectonic structures. On the basis of the high-resolution physical stratigraphy of the studied succession, we propose a facies tract and an evolutionary model for the Cervarola Sandstones in the studied area. Thanks to these results and the analogies with other foredeep deposits of the northern Apennines, such as the Marnoso-arenacea Formation, the Cervarola basin has been interpreted as a highly confined foredeep controlled by intense synsedimentary tectonic activity. The most important evidence supporting this hypothesis is: 1) the upward increase, in the studied stratigraphic succession (about 1000 m thick), of sandstone/mudstone ratio, grain size and Ophiomorpha-type trace fossils, testifying to the high degree of flow deceleration related to the progressive closure and uplift of the foredeep; 2) the occurrence, in the upper part of the stratigraphic succession, of coarse-grained massive sandstones overlain by tractive structures such as megaripples and traction carpets, passing downcurrent into fine-grained laminated contained-reflected beds; this facies tract is interpreted as related to deceleration and decoupling of bipartite flows, with deposition of the basal dense flows and bypass of the upper turbulent flows; 3) the widespread occurrence of contained-reflected beds related to morphological obstacles created by tectonic structures parallel and perpendicular to the basin axis (see for example the Pievepelago line); 4) the occurrence of intra-formational slumps, consisting of highly deformed portions of the fine-grained succession, indicating syn-sedimentary activity of tectonic structures able to destabilize the margins of the basin; these deposits increase towards the upper part of the stratigraphic succession (see points 1 and 2); 5) the impressive lateral facies changes between intrabasinal topographic highs, characterized by fine-grained and thin sandstone beds and marlstones, and depocenters, characterized by thick to very thick coarse-grained massive sandstones; 6) the common occurrence of amalgamation surfaces, flow-impact structures and mud-draped scours related to sudden deceleration of the turbidite flows induced by the structurally-controlled confinement and morphological irregularities.
In conclusion, the CSF shows many analogies with the facies associations occurring in other tectonically-controlled foredeeps, such as those of the Marnoso-arenacea Formation (northern Italy) and the Annot Sandstones (southern France), showing how thrust fronts and transversal structures moving towards the foreland were able to produce a segmented foredeep that strongly influenced turbidity current deposition.
Abstract:
This industry-based research project was undertaken for British Leyland and arose as a result of poor system efficiency on the Maxi and Marina vehicle body build lines. The major factors in the deterioration of system efficiency were identified as: a) the introduction of a 'Gateline' system of vehicle body build; b) the degeneration of a newly introduced measured daywork payment scheme. By relating the conclusions of past work on payment systems to the situation at Cowley, it was concluded that a combination of poor industrial relations and a lack of managerial control had caused the measured daywork scheme to degenerate into a straightforward payment for time at work. This eliminated the monetary incentive to achieve schedule, with the consequence that both inefficiency and operating costs increased. To analyse further the causes of inefficiency, a study of Marina gateline stoppage logs was carried out. This revealed that poor system efficiency on the gateline was caused more by the nature of its design than by poor reliability of individual items of plant. The consideration given to system efficiency at the design stage was found to be negligible, the main obstacles being: a) a lack of understanding of the influence of certain design factors on the efficiency of a production line; b) the absence of data and techniques to predict system efficiency at the design stage. To remedy this situation, a computer simulation study of the design factors was carried out, from which relationships with system efficiency were established and empirical efficiency equations developed. Sets of tables were compiled from the equations, and efficiency data relevant to vehicle body building were established from the gateline stoppage logs. Computer simulation, the equations and the tables, when used in conjunction with good efficiency data, are shown to be accurate methods of predicting production line system efficiency.
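A minimal sketch of the kind of simulation used to relate design factors to system efficiency (a toy serial line with independent station stoppages, not British Leyland's actual model; all parameters are hypothetical):

    import random

    # Toy model: a serial line where a stoppage at any one of `n_stations`
    # stations halts the whole line for that cycle.
    def simulate_line_efficiency(n_stations, p_stoppage_per_cycle, n_cycles=100_000, seed=1):
        random.seed(seed)
        productive = 0
        for _ in range(n_cycles):
            # The cycle produces a body only if no station suffers a stoppage.
            if all(random.random() > p_stoppage_per_cycle for _ in range(n_stations)):
                productive += 1
        # System efficiency = productive cycles / total cycles.
        return productive / n_cycles

    # Example: 20 gateline stations, each with a 1% chance of a stoppage per cycle.
    print(simulate_line_efficiency(20, 0.01))  # roughly 0.99**20, i.e. about 0.82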
Abstract:
The aim of this research is to investigate how risk management in a healthcare organisation can be supported by knowledge management. The subject of the research is the development and management of existing logs called "risk registers", through specific risk management processes employed in an N.H.S. (Foundation) Trust in England, in the U.K. Existing literature on organisational risk management stresses the importance of knowledge for the effective implementation of risk management programmes, claiming that the knowledge used to perceive risk is biased by the beliefs of the individuals and groups involved in risk management and is therefore considered incomplete. Further, the literature on organisational knowledge management presents several definitions and categorisations of knowledge, and approaches for knowledge manipulation in the organisational context as a whole. However, there is no specific approach regarding "how to deal" with knowledge in the course of organisational risk management. The research is based on a single case study of an N.H.S. (Foundation) Trust and is influenced by principles of interpretivism and the frame of mind of Soft Systems Methodology (S.S.M.), investigating the management of risk registers from the viewpoint of the people involved in the situation. Data revealed that knowledge about risks and about the existing risk management policy and procedures is situated in several locations in the Trust and is neither consolidated nor present where and when required. This study proposes a framework that identifies the knowledge required for each of the risk management processes and outlines methods for the conversion of this knowledge, based on the SECI knowledge conversion model, together with activities to facilitate knowledge conversion so that knowledge is effectively used for the development of risk registers and the monitoring of risks throughout the whole Trust under study. This study has theoretical impact in the management science literature as it addresses the issue of incomplete knowledge raised in the risk management literature using concepts from the knowledge management literature, such as the knowledge conversion model. In essence, the combination of the required risk and risk management knowledge with the required type of communication for risk management yields the proposed methods for supporting each risk management process for the risk registers. Further, the indication of the importance of knowledge in risk management, and the presentation of a framework that consolidates the knowledge required for the risk management processes and proposes ways of communicating this knowledge within a healthcare organisation, have practical impact for the management of healthcare organisations.
Abstract:
Most empirical work in economic growth assumes either a Cobb–Douglas production function expressed in logs or a log-approximated constant elasticity of substitution (CES) specification. Estimates from each are likely biased due to logging the model, and the latter can also suffer from approximation bias. We illustrate this with a successful replication of Masanjala and Papageorgiou (The Solow model with CES technology: nonlinearities and parameter heterogeneity, Journal of Applied Econometrics 2004; 19: 171–201) and then estimate both models in levels to avoid these biases. Our estimation in levels gives results in line with conventional wisdom.
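For reference, the standard textbook forms at issue (a sketch of the general specifications, not the paper's exact equations) are:

    Cobb–Douglas (levels):  y = A * k^alpha
    Cobb–Douglas (logs):    ln y = ln A + alpha * ln k
    CES (levels):           y = A * [delta * k^rho + (1 - delta)]^(1/rho)

If the disturbance enters additively in levels, y = f(k) + u, then the error term of the logged equation is no longer additive with zero conditional mean, which is one standard way to see the bias introduced by logging the model; log-linearizing the CES (commonly via the Kmenta approximation) adds a further approximation error on top of that.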
Abstract:
Part of network management is collecting information about the activities that go on around a distributed system and analyzing it in real time, at a deferred moment, or both. One reason such information may be stored in log files and analyzed later is to data-mine it so that interesting, unusual, or abnormal patterns can be discovered. In this paper we propose defining patterns in network activity logs using a dialect of First Order Temporal Logic (FOTL), called First Order Temporal Logic with Duration Constraints (FOTLDC). This logic is powerful enough to describe most network activity patterns because it can handle both causal and temporal correlations. Existing results for data-mining patterns with similar structure give us confidence that discovering FOTLDC patterns in network activity logs can be done efficiently.
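As a minimal sketch of the kind of causally and temporally correlated pattern such a logic can express, the following imperative check (not FOTLDC syntax; the log fields and event names are hypothetical) finds "event B follows event A on the same host within a duration bound":

    from datetime import datetime, timedelta

    def find_pattern(events, type_a, type_b, max_gap=timedelta(minutes=5)):
        """events: list of (timestamp, host, event_type), sorted by timestamp."""
        matches = []
        last_a = {}  # host -> timestamp of the most recent type_a event
        for ts, host, etype in events:
            if etype == type_a:
                last_a[host] = ts
            elif etype == type_b and host in last_a and ts - last_a[host] <= max_gap:
                matches.append((host, last_a[host], ts))
        return matches

    log = [
        (datetime(2024, 1, 1, 10, 0), "switch-1", "LINK_FLAP"),
        (datetime(2024, 1, 1, 10, 2), "switch-1", "OSPF_ADJ_DOWN"),
    ]
    print(find_pattern(log, "LINK_FLAP", "OSPF_ADJ_DOWN"))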
Abstract:
The article presents a new type of log-merging tool for multi-blade telecommunication systems, based on the development of a new approach. The introduction of the new log-merging tool (the Log Merger) can help engineers to build a timeline of process behavior, with a flexible system of information structuring used to assess changes in the analyzed system. This log-merging system, based on the experts' experience and analytical skills, generates a knowledge base which could be advantageous in further decision-making expert system development. This paper proposes and discusses the design and implementation of the Log Merger, its architecture, multi-board analysis capability and application areas. The paper also presents possible ways of further tool improvement, e.g. extending its functionality and covering additional system platforms. The possibility of adding an analysis module for further expert system development is also considered.
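A minimal sketch of timestamp-ordered log merging of the kind described (not the Log Merger's actual design; file names and log format are hypothetical, and each file is assumed to be already sorted by time with an ISO-8601 timestamp as the first field):

    import heapq
    from datetime import datetime

    def merge_logs(paths):
        def parse(path):
            with open(path) as fh:
                for line in fh:
                    ts = datetime.fromisoformat(line.split()[0])
                    yield ts, path, line.rstrip("\n")
        # heapq.merge lazily interleaves the per-board streams into one timeline.
        for ts, source, line in heapq.merge(*(parse(p) for p in paths)):
            print(f"{source}: {line}")

    # Hypothetical usage:
    # merge_logs(["blade1.log", "blade2.log", "blade3.log"])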
Abstract:
GitHub is the most popular repository for open source code (Finley 2011). It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories, as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub, as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study about the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user show a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity of the social connections. We also measure the activity of each user in terms of authored events and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and we investigate how distance influences collaboration.
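A minimal sketch of the reciprocity measurement on a directed follower graph (illustrative data only, not the paper's dataset):

    # follows: set of directed (follower, followed) pairs.
    def reciprocity(follows):
        reciprocated = sum(1 for (a, b) in follows if (b, a) in follows)
        return reciprocated / len(follows) if follows else 0.0

    edges = {("alice", "bob"), ("bob", "alice"), ("alice", "carol")}
    print(reciprocity(edges))  # 2 of 3 directed edges are reciprocated -> about 0.67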
Abstract:
All pathogens require high energetic influxes to counterattack the host immune system, and without this energy bacterial infections are easily cleared. This study is an investigation of one highly bioenergetic pathway in Pseudomonas aeruginosa involving the amino acid L-serine and the enzyme L-serine deaminase (L-SD). P. aeruginosa is an opportunistic pathogen causing infections in patients with compromised immune systems as well as patients with cystic fibrosis. Recent evidence has linked L-SD directly to the pathogenicity of several organisms including, but not limited to, Campylobacter jejuni, Mycobacterium bovis, Streptococcus pyogenes, and Yersinia pestis. We hypothesized that P. aeruginosa L-SD is likely to be critical for its virulence. Genome sequence analysis revealed the presence of two L-SD homologs encoded by sdaA and sdaB. We analyzed the ability of P. aeruginosa to utilize serine and the role of SdaA and SdaB in serine deamination by comparing mutant strains of sdaA (PAOsdaA) and sdaB (PAOsdaB) with their isogenic parent P. aeruginosa PAO1. We demonstrated that P. aeruginosa is unable to use serine as a sole carbon source. However, serine utilization is enhanced in the presence of glycine, and this glycine-dependent induction of L-SD activity requires the inducer serine. The amino acid leucine was shown to inhibit L-SD activity from both SdaA and SdaB, and the net contributions to L-serine deamination by SdaA and SdaB were ascertained at 34% and 66%, respectively. These results suggest that P. aeruginosa L-SD is quite different from the characterized E. coli L-SD, which is glycine-independent but leucine-dependent for activation. Growth mutants able to use serine as a sole carbon source were also isolated and, in addition, suicide vectors were constructed which allow for selective mutation of the sdaA and sdaB genes in any P. aeruginosa strain of interest. Future studies with a double mutant will reveal the importance of these genes for pathogenicity.
Abstract:
Ensemble stream modeling and data-cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism which seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events to eliminate the noises that are uncorrelated, and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble which has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush- or forest-fire event, we take the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing. There are two reasons for this: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory and conceptual knowledge is learned from sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the F-test is performed at each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier. The ensemble framework for data-cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of the sensors led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
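For reference, the standard definition of the F-measure assumed above combines precision and recall (illustrative counts only):

    def f_measure(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    print(f_measure(tp=80, fp=20, fn=40))  # precision 0.80, recall 0.67 -> F of about 0.73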
Abstract:
Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers often seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches can be categorized into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are developed based on event mining algorithms, which are able to automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events with corresponding time lags, which can help administrators determine the redundancies of deployed monitoring situations and the dependencies of system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets with resolutions for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
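A minimal sketch of KNN-style recommendation of historical tickets (bag-of-words cosine similarity; illustrative only, not the dissertation's algorithm; ticket texts and names are hypothetical):

    import math
    from collections import Counter

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in a)
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def recommend(incoming_text, history, k=3):
        """history: list of (ticket_text, resolution) pairs; returns the k best-matching resolutions."""
        query = Counter(incoming_text.lower().split())
        scored = [(cosine(query, Counter(text.lower().split())), resolution)
                  for text, resolution in history]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [resolution for _, resolution in scored[:k]]

    # Hypothetical usage:
    # recommend("database connection timeout on host02", historical_tickets)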
Abstract:
Many systems and applications continuously produce events. These events are used to record the status of the system and trace its behaviors. By examining these events, system administrators can check for potential problems. If the temporal dynamics of the systems are further investigated, the underlying patterns can be discovered. The uncovered knowledge can be leveraged to predict future system behaviors or to mitigate potential risks. Moreover, system administrators can utilize the temporal patterns to set up event management rules, making the system more intelligent. With the popularity of data mining techniques in recent years, these events have gradually become more and more useful. Despite the recent advances in data mining techniques, their application to system event mining is still at a rudimentary stage. Most works still focus on episode mining or frequent pattern discovery. These methods are unable to provide a brief yet comprehensible summary that reveals the valuable information from a high-level perspective. Moreover, these methods provide little actionable knowledge to help system administrators better manage the systems. To make better use of the recorded events, more practical techniques are required. From the perspective of data mining, three correlated directions are considered helpful for system management: (1) provide concise yet comprehensive summaries of the running status of the systems; (2) make the systems more intelligent and autonomous; (3) effectively detect abnormal behaviors of the systems. Due to the richness of the event logs, all these directions can be addressed in a data-driven manner, enhancing the robustness of the systems and approaching the goal of autonomous management. This dissertation mainly focuses on the foregoing directions, leveraging temporal mining techniques to facilitate system management. More specifically, three concrete topics are discussed, including event summarization, resource demand prediction, and streaming anomaly detection. Besides the theoretical contributions, an experimental evaluation is also presented to demonstrate the effectiveness and efficacy of the corresponding solutions.
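A minimal sketch of streaming anomaly detection on an event-count series (a rolling z-score detector; illustrative only, not the dissertation's method, and all thresholds and counts are hypothetical):

    import math
    from collections import deque

    class StreamingAnomalyDetector:
        def __init__(self, window=60, threshold=3.0):
            self.window = deque(maxlen=window)   # recent event counts
            self.threshold = threshold           # z-score cutoff

        def observe(self, value):
            """Return True if `value` deviates strongly from the recent window."""
            anomalous = False
            if len(self.window) >= 10:
                mean = sum(self.window) / len(self.window)
                var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
                std = math.sqrt(var)
                anomalous = std > 0 and abs(value - mean) / std > self.threshold
            self.window.append(value)
            return anomalous

    det = StreamingAnomalyDetector()
    counts = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6, 50]  # the last value is a burst
    print([det.observe(c) for c in counts])       # only the final count is flagged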
Abstract:
Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper level from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower level from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B, made at confining pressure equal to differential pressure, average 6.0 and 3.2 km/s respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio a = 1/32 serves as a lower bound.
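For reference, the standard form of Archie's Law assumed here for porosity from resistivity (illustrative constants, not the paper's calibration):

    # Formation factor F = R_o / R_w = a * phi**(-m), so phi = (a * R_w / R_o)**(1 / m)
    def archie_porosity(r_o, r_w, a=1.0, m=2.0):
        """Porosity from formation resistivity R_o and pore-water resistivity R_w (ohm-m)."""
        return (a * r_w / r_o) ** (1.0 / m)

    # Example with hypothetical resistivities:
    print(archie_porosity(r_o=25.0, r_w=0.25))  # -> 0.10, i.e. 10% porosity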