Abstract:
The aim of this study was to assess the appearance of cardiac troponins (cTnI and/or cTnT) after a short bout (30 s) of ‘all-out’ intense exercise and to determine the stability of any exercise-related cTnI release in response to repeated bouts of high-intensity exercise separated by 7 days of recovery. Eighteen apparently healthy, physically active, male university students completed two all-out 30 s cycle sprints, separated by 7 days. cTnI, blood lactate and catecholamine concentrations were measured before, immediately after and 24 h after each bout. Cycle performance, heart rate and blood pressure responses to exercise were also recorded. Cycle performance was modestly elevated in the second trial [6·5% increase in peak power output (PPO)]; there was no difference in the cardiovascular, lactate or catecholamine response to the two cycle trials. cTnI was not significantly elevated from baseline through recovery (Trial 1: 0·06 ± 0·04 ng ml⁻¹, 0·05 ± 0·04 ng ml⁻¹, 0·03 ± 0·02 ng ml⁻¹; Trial 2: 0·02 ± 0·04 ng ml⁻¹, 0·04 ± 0·03 ng ml⁻¹, 0·05 ± 0·06 ng ml⁻¹) in either trial. The very small within-subject changes were not significantly correlated between the two trials (r = 0·06; P > 0·05). Consequently, short-duration, high-intensity exercise does not elicit a clinically relevant response in cTnI, and any small alterations likely reflect the underlying biological variability of cTnI measurement within the participants.
Abstract:
Wood, Ian; Geissert, M.; Heck, H.; Hieber, M. (2005). 'The Ornstein-Uhlenbeck semigroup in exterior domains', Archiv der Mathematik, 86, pp. 554-562. RAE2008
Abstract:
Warren, J. and James, P. (2006). The ecological effects of exotic disease resistance genes introgressed into British gooseberries. Oecologia, 147(1), 69-75. RAE2008
Abstract:
P.M. Hastie and W. Haresign (2006). A role for LH in the regulation of expression of mRNAs encoding components of the insulin-like growth factor (IGF) system in the ovine corpus luteum. Animal Reproduction Science, 96(1-2), 196-209. Sponsorship: DEFRA RAE2008
Abstract:
The field of redox biology is inherently intertwined with oxidative stress biomarkers. Oxidative stress biomarkers have been utilized for many different objectives. Our analysis indicates that oxidative stress biomarkers have several salient applications: (1) diagnosing oxidative stress, (2) pinpointing likely redox components in a physiological or pathological process, and (3) estimating the severity, progression and/or regression of a disease. By contrast, oxidative stress biomarkers do not report on redox signaling. Alternative approaches that offer more mechanistic insight are: (1) measuring molecules that are integrated in pathways linking redox biochemistry with physiology, (2) using the exomarker approach and (3) exploiting -omics techniques. More sophisticated approaches and large trials are needed to establish oxidative stress biomarkers in the clinical setting.
Abstract:
Monograph presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the Licentiate degree (Licenciatura) in Dental Medicine
Abstract:
Postgraduate Project/Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Pharmaceutical Sciences
Abstract:
Postgraduate Project/Dissertation presented to Universidade Fernando Pessoa in partial fulfilment of the requirements for the degree of Master in Dental Medicine
Abstract:
65 leaves : illustrations, color photographs.
Abstract:
35 color photographs.
Abstract:
The proliferation of inexpensive workstations and networks has prompted several researchers to use such distributed systems for parallel computing. Attempts have been made to offer a shared-memory programming model on such distributed memory computers. Most systems provide a shared memory that is coherent in that all processes that use it agree on the order of all memory events. This dissertation explores the possibility of a significant improvement in the performance of some applications when they use non-coherent memory. First, a new formal model to describe existing non-coherent memories is developed. I use this model to prove that certain problems can be solved using asynchronous iterative algorithms on shared memory in which the coherence constraints are substantially relaxed. In the course of developing the model I discovered a new type of non-coherent behavior called Local Consistency. Second, a programming model, Mermera, is proposed. It provides programmers with a choice of hierarchically related non-coherent behaviors along with one coherent behavior. Thus, one can trade off the ease of programming with coherent memory for improved performance with non-coherent memory. As an example, I present a program to solve a linear system of equations using an asynchronous iterative algorithm. This program uses all the behaviors offered by Mermera. Third, I describe the implementation of Mermera on a BBN Butterfly TC2000 and on a network of workstations. The performance of a version of the equation-solving program that uses all the behaviors of Mermera is compared with that of a version that uses coherent behavior only. For a system of 1000 equations, the former exhibits at least a 5-fold improvement in convergence time over the latter. The version using coherent behavior only does not benefit from employing more than one workstation to solve the problem, while the program using non-coherent behavior continues to achieve improved performance as the number of workstations is increased from 1 to 6. This measurement corroborates our belief that non-coherent shared memory can be a performance boon for some applications.
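As a rough illustration of the kind of computation this dissertation targets, the sketch below runs an asynchronous (chaotic) Jacobi-style iteration in which worker threads update a shared solution vector with no barriers or locks, so each update may read stale values. This is only a minimal sketch under assumed parameters; it is not Mermera, and the names (solve_async, n_workers, sweeps) are hypothetical.

```python
# Minimal sketch of an asynchronous iterative linear solver on shared memory.
# Illustrative only: not the Mermera system; parameters are assumptions.
import threading
import numpy as np

def solve_async(A, b, n_workers=4, sweeps=200):
    n = len(b)
    x = np.zeros(n)                        # shared iterate: no locks, no barriers
    blocks = np.array_split(np.arange(n), n_workers)

    def worker(rows):
        for _ in range(sweeps):
            for i in rows:
                # Jacobi-style update using whatever values of x are currently
                # visible -- possibly stale under relaxed coherence.
                s = A[i] @ x - A[i, i] * x[i]
                x[i] = (b[i] - s) / A[i, i]

    threads = [threading.Thread(target=worker, args=(blk,)) for blk in blocks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant, so the iteration converges
    b = rng.standard_normal(n)
    x = solve_async(A, b)
    print("residual norm:", np.linalg.norm(A @ x - b))
```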
Abstract:
By utilizing structure sharing among its parse trees, a GB parser can increase its efficiency dramatically. Using a GB parser which has as its phrase structure recovery component an implementation of Tomita's algorithm (as described in [Tom86]), we investigate how a GB parser can preserve the structure sharing output by Tomita's algorithm. In this report, we discuss the implications of using Tomita's algorithm in GB parsing, and we give some details of the structure-sharing parser currently under construction. We also discuss a method of parallelizing a GB parser, and relate it to the existing literature on parallel GB parsing. Our approach to preserving sharing within a shared-packed forest is applicable not only to GB parsing, but to any setting in which we want to preserve structure sharing in a parse forest in the presence of features.
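The kind of structure sharing a shared-packed forest relies on can be sketched in a few lines: equal subtrees are built once and referenced by every analysis that uses them. The sketch below is illustrative only, not the parser described above; Node, make and the toy grammar symbols are hypothetical.

```python
# Hash-consing sketch of structure sharing in a parse forest (illustrative only).
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str
    children: tuple = ()          # shared references to sub-forests

_cache = {}

def make(label, *children):
    """Build forest nodes through a cache so identical subtrees are physically shared."""
    key = (label, children)
    if key not in _cache:
        _cache[key] = Node(label, children)
    return _cache[key]

np_node = make("NP", make("Det"), make("N"))
# Two ambiguous analyses ("packed" alternatives) reuse the very same NP node:
parse1 = make("S", np_node, make("VP", make("V")))
parse2 = make("S", np_node, make("VP", make("V"), make("PP")))
print("NP subtree shared:", parse1.children[0] is parse2.children[0])   # True
```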
Abstract:
Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity—meaning that the likelihood a new node in a growing graph will connect to an existing node is proportional to the existing node’s degree. In this paper we ask whether a more general explanation than the B-A model, and absent the assumption of preferential connectivity, is consistent with empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet, and in the number of ASes. We then show via analysis that such a model yields a size distribution exhibiting a power-law tail. In such a model, if an AS’s link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate such a model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
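The core argument, that exponential growth in both the number of ASes and the sizes of individual ASes yields a heavy-tailed size distribution, and that size-proportional link formation then makes degree highly variable as well, can be illustrated with the toy simulation below. It is not the paper's model and does not use its empirically derived estimates; the growth rates (lambda_as, mu_size) and the observation window are assumptions chosen for illustration.

```python
# Toy simulation: exponential growth alone can produce a power-law size tail.
import numpy as np

rng = np.random.default_rng(1)
lambda_as = 1.0      # assumed growth rate of the number of ASes
mu_size   = 0.5      # assumed growth rate of an individual AS's size
T         = 12.0     # assumed observation time

# If the AS population grows like exp(lambda*t), the ages of the ASes alive at
# time T are approximately exponentially distributed with rate lambda.
n_as  = 20_000
ages  = rng.exponential(1.0 / lambda_as, size=n_as)
ages  = ages[ages < T]                     # no AS can be older than the network
sizes = np.exp(mu_size * ages)             # each AS grows exponentially with age

# An exponential of an exponential age is Pareto: P(size > s) ~ s^(-lambda/mu).
tail_exponent = lambda_as / mu_size
emp = np.mean(sizes > 10.0)
print(f"empirical P(size > 10) = {emp:.4f}, "
      f"Pareto prediction = {10.0 ** -tail_exponent:.4f}")

# If link formation is roughly proportional to size, degree inherits the heavy tail.
degrees = rng.poisson(0.3 * sizes)
print("max degree:", degrees.max(), " median degree:", int(np.median(degrees)))
```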
Abstract:
In this paper, we expose an unorthodox adversarial attack that exploits the transients of a system's adaptive behavior, as opposed to its limited steady-state capacity. We show that a well-orchestrated attack could introduce significant inefficiencies that could potentially deprive a network element of much of its capacity, or significantly reduce its service quality, while evading detection by consuming an unsuspicious, small fraction of that element's hijacked capacity. This type of attack stands in sharp contrast to traditional brute-force, sustained high-rate DoS attacks, as well as recently proposed attacks that exploit specific protocol settings such as TCP timeouts. We exemplify what we term Reduction of Quality (RoQ) attacks by exposing the vulnerabilities of common adaptation mechanisms. We develop control-theoretic models and associated metrics to quantify these vulnerabilities. We present numerical and simulation results, which we validate with observations from real Internet experiments. Our findings motivate the need for the development of adaptation mechanisms that are resilient to these new forms of attacks.
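A rough sense of the transient-targeting idea is given by the discrete-time toy below, in which short, periodic bursts hit an AIMD-like rate adapter just as it recovers, so a low average attack rate causes a disproportionate loss of goodput. This is a minimal sketch, not the paper's control-theoretic model or metrics; every constant in it (CAPACITY, BURST_PERIOD, and so on) is an assumed, illustrative value.

```python
# Toy illustration of a Reduction-of-Quality attack on an AIMD-like adapter.
CAPACITY     = 100.0   # link capacity (units per slot), assumed
ALPHA        = 1.0     # additive increase per slot
BETA         = 0.5     # multiplicative decrease on congestion
BURST_PERIOD = 60      # slots between attack bursts
BURST_LEN    = 2       # slots each burst lasts
BURST_RATE   = 80.0    # attack rate during a burst

def run(attacked, slots=60_000):
    rate, goodput, attack_volume = 1.0, 0.0, 0.0
    for t in range(slots):
        attack = BURST_RATE if attacked and t % BURST_PERIOD < BURST_LEN else 0.0
        if rate + attack > CAPACITY:          # congestion: back off
            rate *= BETA
        else:                                 # no loss: deliver and increase
            goodput += rate
            rate = min(rate + ALPHA, CAPACITY)
        attack_volume += attack
    return goodput / slots, attack_volume / slots

base, _  = run(attacked=False)
hit, atk = run(attacked=True)
print(f"goodput without attack: {base:.1f}")
print(f"goodput under attack:   {hit:.1f}  (attacker's average rate: {atk:.1f})")
```

Even with an average attack rate of only a few units per slot, the periodic bursts keep forcing the adapter back into its ramp-up transient, roughly halving the goodput in this toy setting.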
Abstract:
A problem with Speculative Concurrency Control algorithms, and with other common concurrency control schemes using forward validation, is that committing a transaction as soon as it finishes validating may result in a loss of value to the system. Haritsa showed that by making a lower-priority transaction wait after it is validated, the number of transactions meeting their deadlines is increased, which may result in a higher value added to the system. SCC-based protocols can benefit from the introduction of such delays by giving optimistic shadows that add high value to the system more time to execute and commit, instead of being aborted in favor of other validating transactions whose value added to the system is lower. In this paper we present and evaluate an extension to SCC algorithms that allows for commit deferments.
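To make the deferment idea concrete, the sketch below shows a validated transaction waiting, within a bounded window, for a conflicting higher-value transaction that is about to finish, rather than committing immediately and forcing it to abort. This is only an illustrative toy under assumed inputs, not the SCC extension evaluated in the paper; Txn, commit_with_deferment and max_defer are hypothetical names.

```python
# Toy sketch of commit deferment for a validated transaction (illustrative only).
from dataclasses import dataclass

@dataclass
class Txn:
    name: str
    value: float          # value added to the system if it commits
    finish_time: float    # time at which it would finish executing

def commit_with_deferment(validated, in_flight, now, max_defer):
    """Return a commit order in which the validated transaction defers to
    conflicting higher-value transactions that can finish within max_defer."""
    worth_waiting_for = [t for t in in_flight
                         if t.value > validated.value
                         and t.finish_time - now <= max_defer]
    worth_waiting_for.sort(key=lambda t: t.finish_time)
    return [t.name for t in worth_waiting_for] + [validated.name]

# T1 has just validated, but T2 adds more value and needs only 2 more time
# units; with a deferment window of 3, T1 waits and both transactions commit.
order = commit_with_deferment(Txn("T1", value=1.0, finish_time=10.0),
                              [Txn("T2", value=5.0, finish_time=12.0)],
                              now=10.0, max_defer=3.0)
print(order)   # -> ['T2', 'T1']
```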