913 results for platelet function tests
Abstract:
The paper deals with the approximate analysis of non-linear non-conservative systems of two degrees of freedom subjected to step-function excitation. The method of averaging of Krylov and Bogoliubov is used to arrive at approximate equations for the amplitude and phase. An example of a spring-mass-damper system is presented to illustrate the method, and a comparison with numerical results brings out the validity of the approach.
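For orientation, the first approximation of the Krylov-Bogoliubov averaging method is usually stated for a single weakly non-linear oscillator; the sketch below gives that standard single-degree-of-freedom form (the paper itself treats two degrees of freedom under step-function excitation, so this is only illustrative). For \ddot{x} + \omega^2 x = \epsilon f(x, \dot{x}), writing x = a(t)\cos\psi with \psi = \omega t + \varphi(t), averaging over one cycle gives the slowly varying amplitude and phase equations

    \dot{a} = -\frac{\epsilon}{2\pi\omega} \int_0^{2\pi} f(a\cos\psi,\, -a\omega\sin\psi)\,\sin\psi\, d\psi,
    \dot{\varphi} = -\frac{\epsilon}{2\pi a\omega} \int_0^{2\pi} f(a\cos\psi,\, -a\omega\sin\psi)\,\cos\psi\, d\psi.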
Abstract:
CONTEXT: Conduit artery flow-mediated dilation (FMD) is a noninvasive index of preclinical atherosclerosis in humans. Exercise interventions can improve FMD in both healthy and clinical populations. OBJECTIVE: This systematic review and meta-analysis aimed to summarize the effect of exercise training on FMD in overweight and obese children and adolescents, and to investigate the role of cardiorespiratory fitness (peak oxygen consumption [Vo2peak]) in the effects observed. DATA SOURCES: PubMed, Medline, Embase, and CINAHL databases were searched from the earliest available date to February 2015. STUDY SELECTION: Studies of children and/or adolescents who were overweight or obese were included. DATA EXTRACTION: Standardized data extraction forms were used for patient and intervention characteristics, control/comparator groups, and key outcomes. Procedural quality of the studies was assessed using a modified version of the Physiotherapy Evidence Database (PEDro) scale. RESULTS: A meta-analysis involving 219 participants compared the mean difference of pre- versus postintervention vascular function (FMD) and Vo2peak between an exercise training intervention and a control condition. There was a significantly greater improvement in FMD (mean difference 1.54%, P < .05) and Vo2peak (mean difference 3.64 mL/kg/min, P < .05) after exercise training compared with controls. LIMITATIONS: Given the diversity of exercise prescriptions, participant characteristics, and FMD measurement protocols, FMD effect sizes varied between trials. CONCLUSIONS: Exercise training improves vascular function in overweight and obese children, as indicated by enhanced FMD. Further research is required to establish the optimum exercise program for maintenance of healthy vascular function in this at-risk pediatric population.
Abstract:
ALICE (A Large Ion Collider Experiment) is an experiment at CERN (European Organization for Nuclear Research) in which a dedicated heavy-ion detector exploits the unique physics potential of nucleus-nucleus interactions at LHC (Large Hadron Collider) energies. As part of that project, 716 so-called type V4 modules were assembled in the Detector Laboratory of the Helsinki Institute of Physics during the years 2004-2006. With altogether over a million detector strips, this has been the most massive particle detector project in the science history of Finland. One ALICE SSD module consists of a double-sided silicon sensor, two hybrids containing 12 HAL25 front-end readout chips, and some passive components, such as resistors and capacitors. The components are connected together by TAB (Tape Automated Bonding) microcables. The components of the modules were tested in every assembly phase with comparable electrical tests to ensure the reliable functioning of the detectors and to pinpoint possible problems. Components were accepted or rejected according to limits confirmed by the ALICE collaboration. This study concentrates on the test results of framed chips, hybrids and modules. The total yield of the framed chips is 90.8%, of the hybrids 96.1% and of the modules 86.2%. The individual test results have been examined in the light of the known error sources that appeared during the project. After the problems of the project's learning curve were solved, material problems, such as defective chip cables and sensors, seemed to cause most of the assembly rejections. The problems were typically seen in tests as too many individual channel failures. In contrast, bonding failures rarely caused the rejection of any component. One sensor type among the three sensor manufacturers has proven to have lower quality than the others: the sensors of this manufacturer are very noisy, and their depletion voltages are usually outside the specification given to the manufacturers. Reaching a 95% assembly yield during module production demonstrates that the assembly process has been highly successful.
Abstract:
The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is the result of multiple distinct crash generating processes, including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies when trying to either explain or predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash generating process exists. This model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominantly caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple crash process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach, in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk generating processes: engineering factors and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia, and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness-of-fit measures illustrates significantly improved performance of the proposed model compared to the EB-NB model. The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.
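As a point of reference only (the paper's exact specification, priors and covariate structure are not reproduced here), a two-class latent count model of the kind described can be sketched as a mixture in which each site's crash count is generated by one of two processes:

    P(y_i) = \pi\, \mathrm{Poisson}\!\left(y_i \mid \lambda_i^{\mathrm{eng}}\right) + (1-\pi)\, \mathrm{Poisson}\!\left(y_i \mid \lambda_i^{\mathrm{spat}}\right), \qquad \log \lambda_i^{\mathrm{eng}} = \mathbf{x}_i^{\top}\boldsymbol{\beta},

with \pi the probability that a site's crashes arise from the engineering-driven process, \mathbf{x}_i the geometric/engineering covariates, and \lambda_i^{\mathrm{spat}} capturing unobserved spatial effects; in a Bayesian treatment, priors on \pi and \boldsymbol{\beta} encode the assumed contribution of each process to the total count.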
Abstract:
Objectives Hematoma quality (especially the fibrin matrix) plays an important role in the bone healing process. Here, we investigated the effect of interleukin-1 beta (IL-1β) on fibrin clot formation from platelet-poor plasma (PPP). Methods Five-milliliter rat whole-blood samples were collected from the hepatic portal vein. All blood samples were first standardized via a thrombelastograph (TEG), blood cell counts, and measurement of the fibrinogen concentration. PPP was prepared by collecting the top two-fifths of the plasma after centrifugation at 400 × g for 10 min at 20°C. The effects of the IL-1β cytokine on artificial fibrin clot formation from PPP solutions were determined by scanning electron microscopy (SEM), confocal microscopy (CM), turbidity, and clot lysis assays. Results The lag time for protofibril formation was markedly shortened in the IL-1β treatment groups (243.8 ± 76.85 with 50 pg/mL IL-1β and 97.5 ± 19.36 with 500 pg/mL IL-1β) compared to the control group without IL-1β (543.8 ± 205.8). Maximal turbidity was observed in the control group. IL-1β (500 pg/mL) treatment significantly decreased fiber diameters, resulting in smaller pore sizes and an increased density of the fibrin clot structure formed from PPP (P < 0.05). The clot lysis assay revealed that 500 pg/mL IL-1β induced a lower susceptibility to dissolution due to the formation of thinner and denser fibers. Conclusion IL-1β can significantly influence PPP fibrin clot structure, which may affect the early bone healing process.
Abstract:
The anti-thrombotic properties of an anthocyanin-rich Queen Garnet plum juice (QGPJ) and an anthocyanin-free prune juice (PJ) were studied in this randomised, double-blind, crossover trial. Twenty-one healthy subjects (M = 10, F = 11) consumed QGPJ, PJ or placebo, 200 mL/day for 28 days, followed by a 2-week wash-out period. Only QGPJ supplementation inhibited platelet aggregation induced by ADP (<5%, P = 0.02), collagen (<2.7%, P < 0.001) and arachidonic acid (<4%, P < 0.001); reduced the platelet activation-dependent surface-marker P-selectin expression of activated de-granulated platelets (<17.2%, P = 0.04); prolonged activated partial thromboplastin clotting time (>2.1 s, P = 0.03); and reduced plasma fibrinogen (<7.5%, P = 0.02) and malondialdehyde, a plasma biomarker of oxidative stress (P = 0.016). PJ supplementation increased plasma hippuric acid content (P = 0.018). Neither QGPJ nor PJ supplementation affected blood cell counts, lipid profile, or inflammation markers. Our findings suggest that QGPJ, but not PJ, has the potential to significantly attenuate thrombosis by reducing platelet activation/hyper-coagulability and oxidative stress.
Abstract:
The goal of this research is to understand the function of allelic variation in genes underpinning the stay-green drought adaptation trait in sorghum, in order to enhance yield in water-limited environments. Stay-green, a delayed leaf senescence phenotype in sorghum, is primarily an emergent consequence of an improved balance between the supply and demand of water. Positional and functional fine-mapping of candidate genes associated with stay-green in sorghum is the focus of an international research partnership between Australian (UQ/DAFFQ) and US (Texas A&M University) scientists. Stay-green was initially mapped to four chromosomal regions (Stg1, Stg2, Stg3, and Stg4) by a number of research groups in the US and Australia. Physiological dissection of near-isolines containing single introgressions of the Stg QTL (Stg1-4) indicates that these QTL reduce water demand before flowering by constricting the size of the canopy, thereby increasing water availability during grain filling and, ultimately, grain yield. Stg and root angle QTL are also co-located and, together with crop water use data, suggest a role for roots in the stay-green phenomenon. Candidate genes have been identified in Stg1-4, including genes from the PIN family of auxin efflux carriers in Stg1 and Stg2, with 10 of 11 PIN genes in sorghum co-locating with Stg QTL. Preliminary RNA expression profiling studies have found modified expression of some of these PIN candidates in stay-green compared with senescent types. Further proof-of-function studies are underway, including comparative genomics, SNP analysis to assess diversity at candidate genes, reverse genetics and transformation.
Abstract:
A composition operator is a linear operator between spaces of analytic or harmonic functions on the unit disk, which precomposes a function with a fixed self-map of the disk. A fundamental problem is to relate properties of a composition operator to the function-theoretic properties of the self-map. In recent decades these operators have been studied very actively in connection with various function spaces. The study of composition operators lies in the intersection of two central fields of mathematical analysis: function theory and operator theory. This thesis consists of four research articles and an overview. In the first three articles the weak compactness of composition operators is studied on certain vector-valued function spaces. A vector-valued function takes its values in some complex Banach space. In the first and third articles sufficient conditions are given for a composition operator to be weakly compact on different versions of vector-valued BMOA spaces. In the second article characterizations are given for the weak compactness of a composition operator on harmonic Hardy spaces and spaces of Cauchy transforms, provided the functions take values in a reflexive Banach space. Composition operators are also considered on certain weak versions of the above function spaces. In addition, the relationship between different vector-valued function spaces is analyzed. In the fourth article weighted composition operators are studied on the scalar-valued BMOA space and its subspace VMOA. A weighted composition operator is obtained by first applying a composition operator and then a pointwise multiplier. A complete characterization is given for the boundedness and compactness of a weighted composition operator on BMOA and VMOA. Moreover, the essential norm of a weighted composition operator on VMOA is estimated. These results generalize many previously known results about composition operators and pointwise multipliers on these spaces.
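In standard notation, with \varphi an analytic self-map of the unit disk \mathbb{D} and u a fixed analytic function on \mathbb{D}, the operators discussed above act by

    (C_\varphi f)(z) = f(\varphi(z)), \qquad (u C_\varphi f)(z) = u(z)\, f(\varphi(z)), \qquad z \in \mathbb{D},

so a weighted composition operator u C_\varphi is precisely the composition operator C_\varphi followed by the pointwise multiplier M_u.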
Abstract:
Fisheries management agencies around the world collect age data for the purpose of assessing the status of natural resources in their jurisdiction. Estimates of mortality rates represent key information for assessing the sustainability of fish stock exploitation. Contrary to medical research or manufacturing, where survival analysis is routinely applied to estimate failure rates, survival analysis has seldom been applied in fisheries stock assessment, despite the similar purposes of these fields of applied statistics. In this paper, we developed hazard functions to model the dynamics of an exploited fish population. These functions were used to estimate all parameters necessary for stock assessment (including natural and fishing mortality rates as well as gear selectivity) by maximum likelihood using age data from a sample of the catch. This novel application of survival analysis to fisheries stock assessment was tested by Monte Carlo simulations to verify that it provided unbiased estimates of the relevant quantities. The method was applied to data from the Queensland (Australia) sea mullet (Mugil cephalus) commercial fishery collected between 2007 and 2014. It provided, for the first time, an estimate of the natural mortality affecting this stock: 0.22 ± 0.08 year⁻¹.
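For orientation only, the link between a hazard function and the age composition of the catch can be sketched under the simplifying assumptions of a constant total hazard Z (natural plus fishing mortality) and full gear selectivity above some age a_c; the paper's hazard functions are richer than this. The survival and age-density functions are

    S(a) = \exp\!\left(-\int_{a_c}^{a} h(u)\, du\right) = e^{-Z(a - a_c)}, \qquad p(a) = Z\, e^{-Z(a - a_c)}, \quad a \ge a_c,

so the log-likelihood of sampled ages a_1, \dots, a_n is \ell(Z) = n \log Z - Z \sum_i (a_i - a_c), maximized at \hat{Z} = n / \sum_i (a_i - a_c). Separating Z into natural and fishing components and adding a selectivity curve requires the fuller hazard-function formulation developed in the paper.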
Abstract:
Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the “RMF-property”. It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces, we study the Rademacher maximal function on Euclidean spaces. The requirements on the type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions. As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions, giving yet another proof of its independence from various parameters.
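For reference, the classical (dyadic) maximal function against which this construction is modelled is

    Mf(x) = \sup_{Q \ni x} \frac{1}{|Q|} \int_Q |f(y)|\, dy,

the supremum being taken over dyadic cubes Q containing x. Schematically, and only as a hedged paraphrase of the definition due to Hytönen, McIntosh and Portal, the Rademacher maximal function of an operator-valued f replaces this supremum of averages by the R-bound of the set of averages,

    M_R f(x) = \mathcal{R}\!\left(\{\langle f \rangle_Q : Q \ni x\}\right), \qquad \langle f \rangle_Q = \frac{1}{|Q|} \int_Q f,

which reduces to the ordinary maximal function in the scalar case, since the R-bound of a family of scalars is comparable to its supremum.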
Abstract:
Introduction Schizophrenia is a severe mental disorder in which multiple psychopathological domains are affected. Several lines of evidence indicate that cognitive impairment is a key component of schizophrenia psychopathology. Although there have been a multitude of cognitive studies in schizophrenia, there are many conflicting results. We reasoned that this could be due to individual differences among patients (i.e. variation in the severity of positive vs. negative symptoms), different task designs, and/or the administration of different antipsychotics. Methods We therefore review existing data, concentrating on these dimensions, specifically in relation to dopamine function. We focus on the most commonly studied cognitive domains: learning, working memory, and attention. Results We found that the type of cognitive domain under investigation, medication state and type, and the severity of positive and negative symptoms can explain the conflicting results in the literature. Conclusions This review points to the need for future studies investigating individual differences among schizophrenia patients in order to reveal the exact relationship between cognitive function, clinical features, and antipsychotic treatment.
Abstract:
As accountants, we are all familiar with the SUM function, which calculates the sum of a range of numbers. However, there are instances where we might want to sum the numbers in a given range only when a specified criterion is met. In this instance the SUMIF function can achieve this objective.
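In Excel the corresponding worksheet formula is =SUMIF(range, criteria, [sum_range]). Purely as an illustration of the same idea outside a spreadsheet, the minimal Python sketch below (the name sum_if and its arguments are hypothetical, not part of any spreadsheet API) sums one list conditional on a test applied to another:

    def sum_if(criteria_range, predicate, sum_range=None):
        # Sum the entries of sum_range whose corresponding entry in
        # criteria_range satisfies the predicate; if no separate
        # sum_range is given, sum the qualifying entries themselves.
        sum_range = criteria_range if sum_range is None else sum_range
        return sum(s for c, s in zip(criteria_range, sum_range) if predicate(c))

    # Example: total of the sales booked against the "East" region only.
    regions = ["East", "West", "East", "North"]
    sales = [100, 250, 75, 40]
    east_total = sum_if(regions, lambda r: r == "East", sales)  # 175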
Abstract:
The recent trend towards minimizing the interconnections in large scale integration (LSI) circuits has led to intensive investigation into the development of ternary circuits and the improvement of their design. The ternary multiplexer is a convenient and useful logic module which can be used as a basic building block in the design of a ternary system. This paper discusses a systematic procedure for the simplification and realization of ternary functions using ternary multiplexers as building blocks. Both single-level and multilevel multiplexing techniques are considered. The importance of the design procedure is highlighted by considering two specific applications, namely, the development of a ternary adder/subtractor and a TCD-to-ternary converter.
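As background (a generic statement of multiplexer-based realization, not the paper's specific simplification procedure), a single-level design rests on the ternary analogue of Shannon's expansion about a selected variable:

    f(x_1, x_2, \dots, x_n) = \mathrm{MUX}\big(x_1;\ f(0, x_2, \dots, x_n),\ f(1, x_2, \dots, x_n),\ f(2, x_2, \dots, x_n)\big),

where the ternary multiplexer routes one of the three residue functions to the output according to the value of x_1 ∈ {0, 1, 2}; multilevel designs repeat the expansion on the residues with further multiplexers.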
Abstract:
It is shown that at most n + 3 tests are required to detect any single stuck-at fault in an AND gate, or any single faulty EXCLUSIVE OR (EOR) gate, in a Reed-Muller canonical form realization of a switching function.
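For context, the Reed-Muller canonical expansion (shown here in its positive-polarity form) realizes an n-variable switching function as a modulo-2 (EXCLUSIVE OR) sum of AND terms,

    f(x_1, \dots, x_n) = c_0 \oplus c_1 x_1 \oplus c_2 x_2 \oplus \dots \oplus c_{12} x_1 x_2 \oplus \dots \oplus c_{12\cdots n} x_1 x_2 \cdots x_n, \qquad c_i \in \{0, 1\},

i.e. a network of AND gates feeding a cascade of EOR gates; the quoted bound concerns the number of input vectors needed to detect a single stuck-at fault on an AND gate, or a single faulty EOR gate, in such a realization.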