920 results for Count of platelets
Abstract:
Platelet-derived microparticles that are produced during platelet activation bind to traumatized endothelium. Such endothelial injury occurs during percutaneous transluminal coronary angioplasty. Approximately 20% of these patients subsequently develop restenosis, although this rate is improved by treatment with the anti-platelet glycoprotein IIb/IIIa receptor drug abciximab. As platelet activation occurs during angioplasty, platelet-derived microparticles are likely produced and may hence contribute to restenosis. The study population consisted of 113 angioplasty patients, of whom 38 received abciximab. Paired peripheral arterial blood samples were obtained following heparinization and subsequent to all vessel manipulation. Platelet-derived microparticles were identified using an anti-CD61 (glycoprotein IIIa) fluorescence-conjugated antibody and flow cytometry. Baseline clinical characteristics between patient groups were similar. The level of platelet-derived microparticles increased significantly following angioplasty in the group without abciximab (paired t test, P = 0.019). However, there was no significant change in the level of platelet-derived microparticles following angioplasty in patients who received abciximab, despite these patients requiring more complex angioplasty procedures. In this study, we have demonstrated that the level of platelet-derived microparticles increased during percutaneous transluminal coronary angioplasty, with no such increase with abciximab treatment. The increased platelet-derived microparticles may adhere to traumatized endothelium, contributing to re-occlusion of the arteries, but this remains to be determined.
Abstract:
Closed WS2 nanoboxes were formed by topotactic sulfidization of a WO3/WO3·1/3H2O intergrowth precursor. Automated diffraction tomography was used to elucidate the growth mechanism of these unconventional hollow structures. By partial conversion and structural analysis of the products, each of them representing a snapshot of the reaction at a given point in time, the overall reaction can be broken down into a cascade of individual steps, each identified with a basic mechanism. During the initial step of sulfidization, WO3·1/3H2O transforms into hexagonal WO3, whose surface allows for the epitaxial induction of WS2. The initially formed platelets of WS2 exhibit a preferred orientation with respect to the nanorod surface. In the final step, individual layers of WS2 coalesce to form closed shells. In essence, a cascade of several topotactic reactions leads to epitaxial induction and the formation of closed rectangular hollow boxes made up of hexagonal layers.
Abstract:
In the commercial food industry, demonstration of microbiological safety and thermal process equivalence often involves a mathematical framework that assumes log-linear inactivation kinetics and invokes concepts of decimal reduction time (DT), z values, and accumulated lethality. However, many microbes, particularly spores, exhibit inactivation kinetics that are not log linear. This has led to alternative modeling approaches, such as the biphasic and Weibull models, that relax strong log-linear assumptions. Using a statistical framework, we developed a novel log-quadratic model, which approximates the biphasic and Weibull models and provides additional physiological interpretability. As a statistical linear model, the log-quadratic model is relatively simple to fit and straightforwardly provides confidence intervals for its fitted values. It allows a DT-like value to be derived, even from data that exhibit obvious "tailing." We also showed how existing models of non-log-linear microbial inactivation, such as the Weibull model, can fit into a statistical linear model framework that dramatically simplifies their solution. We applied the log-quadratic model to thermal inactivation data for the spore-forming bacterium Clostridium botulinum and evaluated its merits compared with those of popular previously described approaches. The log-quadratic model was used as the basis of a secondary model that can capture the dependence of microbial inactivation kinetics on temperature. This model, in turn, was linked to models of spore inactivation of Sapru et al. and Rodriguez et al. that posit different physiological states for spores within a population. We believe that the log-quadratic model provides a useful framework in which to test vitalistic and mechanistic hypotheses of inactivation by thermal and other processes. Copyright © 2009, American Society for Microbiology. All Rights Reserved.
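Because the log-quadratic model described above is linear in its coefficients, it can be fit by ordinary least squares. The sketch below is an illustration on synthetic data only (the function names and the synthetic survivor curve are ours, not the authors'); it fits log10 N(t) = b0 + b1·t + b2·t² and derives a DT-like value from the initial slope.

```python
# Minimal illustration (not the authors' code): fit the log-quadratic
# inactivation model  log10 N(t) = b0 + b1*t + b2*t**2  by ordinary
# least squares, then derive a DT-like value from the initial slope.
import numpy as np

def fit_log_quadratic(t, log10_counts):
    """OLS fit of log10 survivor counts against time; returns (b0, b1, b2)."""
    X = np.column_stack([np.ones_like(t), t, t**2])
    coef, *_ = np.linalg.lstsq(X, log10_counts, rcond=None)
    return coef

# Synthetic survivor curve with mild "tailing" (the decay slows at long times)
t = np.linspace(0.0, 10.0, 11)
rng = np.random.default_rng(0)
obs = 8.0 - 1.2 * t + 0.05 * t**2 + rng.normal(0.0, 0.05, size=t.size)

b0, b1, b2 = fit_log_quadratic(t, obs)
DT_initial = -1.0 / b1  # time for one decimal reduction at t = 0
```

Because the model is a statistical linear model, standard linear-regression machinery also yields confidence intervals for the fitted values, which is the simplicity advantage the abstract highlights.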
Abstract:
A hippocampal CA3 memory model was constructed with PGENESIS, a recently developed version of GENESIS that allows for distributed processing of a neural network simulation. A number of neural models of the human memory system have identified the CA3 region of the hippocampus as storing the declarative memory trace. However, computational models designed to assess the viability of the putative mechanisms of storage and retrieval have generally been too abstract to allow comparison with empirical data. Recent experimental evidence has shown that selective knock-out of NMDA receptors in the CA1 of mice leads to reduced stability of firing specificity in place cells. Here, a similar reduction of stability of input specificity is demonstrated in a biologically plausible neural network model of the CA3 region, under conditions of Hebbian synaptic plasticity versus an absence of plasticity. The CA3 region is also commonly associated with seizure activity. Further simulations of the same model tested the response to continuously repeating versus randomized non-repeating input patterns. Each paradigm delivered input of equal intensity and duration. Non-repeating input patterns elicited a greater pyramidal cell spike count. This suggests that repeating versus non-repeating neocortical input has a quantitatively different effect on the hippocampus. This may be relevant to the production of independent epileptogenic zones and the process of encoding new memories.
Abstract:
Magnetic resonance is a well-established tool for structural characterisation of porous media. Features of pore-space morphology can be inferred from NMR diffusion-diffraction plots or the time-dependence of the apparent diffusion coefficient. Diffusion NMR signal attenuation can be computed from the restricted diffusion propagator, which describes the distribution of diffusing particles for a given starting position and diffusion time. We present two techniques for efficient evaluation of restricted diffusion propagators for use in NMR porous-media characterisation. The first is the Lattice Path Count (LPC). Its physical essence is that the restricted diffusion propagator connecting points A and B in time t is proportional to the number of distinct length-t paths from A to B. By using a discrete lattice, the number of such paths can be counted exactly. The second technique is the Markov transition matrix (MTM). The matrix represents the probabilities of jumps between every pair of lattice nodes within a single timestep. The propagator for an arbitrary diffusion time can be calculated as the appropriate matrix power. For periodic geometries, the transition matrix needs to be defined only for a single unit cell. This makes MTM ideally suited for periodic systems. Both LPC and MTM are closely related to existing computational techniques: LPC, to combinatorial techniques; and MTM, to the Fokker-Planck master equation. The relationship between LPC, MTM and other computational techniques is briefly discussed in the paper. Both LPC and MTM perform favourably compared to Monte Carlo sampling, yielding highly accurate and almost noiseless restricted diffusion propagators. Initial tests indicate that their computational performance is comparable to that of finite element methods. Both LPC and MTM can be applied to complicated pore-space geometries with no analytic solution. 
We discuss the new methods in the context of diffusion propagator calculation in porous materials and model biological tissues.
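The Markov transition matrix (MTM) technique described above can be illustrated with a toy 1-D example (ours, not the paper's implementation): build the single-timestep jump-probability matrix for a lattice with reflecting (impermeable) walls, then obtain the restricted diffusion propagator for k timesteps as the k-th matrix power.

```python
# Toy 1-D sketch of the Markov transition matrix (MTM) technique:
# P[i, j] is the probability of moving from lattice node j to node i in one
# timestep; the restricted-diffusion propagator after k steps is P**k.
import numpy as np

def transition_matrix(n):
    """Unbiased jumps to nearest neighbours; reflecting walls at both ends."""
    P = np.zeros((n, n))
    for j in range(n):
        for step in (-1, 1):
            i = j + step
            if 0 <= i < n:
                P[i, j] += 0.5
            else:
                P[j, j] += 0.5  # jump into the wall -> particle stays put
    return P

n, k = 21, 50
P = transition_matrix(n)
propagator = np.linalg.matrix_power(P, k)
# Column j of `propagator` is the distribution after k steps for a particle
# starting at node j -- a discrete restricted diffusion propagator.
dist = propagator[:, n // 2]
```

For a periodic geometry the same construction applies to a single unit cell with wrapped indices, which is why the abstract notes that MTM is ideally suited to periodic systems.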
Abstract:
The gross under-resourcing of conservation endeavours has placed an increasing emphasis on spending accountability. Increased accountability has led to monitoring forming a central element of conservation programs. Although there is little doubt that information obtained from monitoring can improve management of biodiversity, the cost (in time and/or money) of gaining this knowledge is rarely considered when making decisions about allocation of resources to monitoring. We present a simple framework allowing managers and policy advisors to make decisions about when to invest in monitoring to improve management. © 2010 Elsevier Ltd.
Abstract:
In this paper we present a new method for performing Bayesian parameter inference and model choice for low-count time series models with intractable likelihoods. The method incorporates an alive particle filter within a sequential Monte Carlo (SMC) algorithm to create a novel pseudo-marginal algorithm, which we refer to as alive SMC^2. The advantages of this approach over competing approaches are that it is naturally adaptive, it does not involve the between-model proposals required in reversible jump Markov chain Monte Carlo, and it does not rely on potentially rough approximations. The algorithm is demonstrated on Markov process and integer autoregressive moving average models applied to real biological datasets of hospital-acquired pathogen incidence, animal health time series and the cumulative number of prion disease cases in mule deer.
Abstract:
Although the external influence of scholars has usually been approximated by publication and citation count, the array of scholarly activities is far more extensive. Today, new technologies, in particular Internet search engines, allow more accurate measurement of scholars' influence on societal discourse. Hence, in this article, we analyse the relation between the internal and external influence of 723 top economists using the number of pages indexed by Google and Bing as a measure of external influence. We not only identify a small association between these scholars’ internal and external influence but also a correlation between internal influence, as captured by receipt of such major academic awards as the Nobel Prize and John Bates Clark Medal, and the external prominence of the top 100 researchers (JEL Code: A11, A13, Z18).
Abstract:
The past five years have seen many scientific and biological discoveries made through the experimental design of genome-wide association studies (GWASs). These studies were aimed at detecting variants at genomic loci that are associated with complex traits in the population and, in particular, at detecting associations between common single-nucleotide polymorphisms (SNPs) and common diseases such as heart disease, diabetes, auto-immune diseases, and psychiatric disorders. We start by giving a number of quotes from scientists and journalists about perceived problems with GWASs. We will then briefly give the history of GWASs and focus on the discoveries made through this experimental design, what those discoveries tell us and do not tell us about the genetics and biology of complex traits, and what immediate utility has come out of these studies. Rather than giving an exhaustive review of all reported findings for all diseases and other complex traits, we focus on the results for auto-immune diseases and metabolic diseases. We return to the perceived failure or disappointment about GWASs in the concluding section. © 2012 The American Society of Human Genetics.
Abstract:
Although grass pollen is widely regarded as the major outdoor aeroallergen source in Australia and New Zealand (NZ), no assemblage of airborne pollen data for the region has been previously compiled. Grass pollen count data collected at 14 urban sites in Australia and NZ over periods ranging from 1 to 17 years were acquired, assembled and compared, revealing considerable spatiotemporal variability. Although direct comparison between these data is problematic due to methodological differences between monitoring sites, the following patterns are apparent. Grass pollen seasons tended to have more than one peak from tropics to latitudes of 37°S and single peaks at sites south of this latitude. A longer grass pollen season was therefore found at sites below 37°S, driven by later seasonal end dates for grass growth and flowering. Daily pollen counts increased with latitude; subtropical regions had seasons of both high intensity and long duration. At higher latitude sites, the single springtime grass pollen peak is potentially due to a cooler growing season and a predominance of pollen from C
Abstract:
Background: Gastroesophageal reflux disease (GORD) can cause respiratory disease in children through recurrent aspiration of gastric contents. GORD can be defined in several ways, and one of the most common methods is the presence of reflux oesophagitis. In children with GORD and respiratory disease, airway neutrophilia has been described. However, there are no prospective studies that have examined airway cellularity in children with GORD but without respiratory disease. The aims of the study were to compare (1) BAL cellularity and lipid laden macrophage index (LLMI) and (2) microbiology of BAL and gastric juices of children with GORD (G+) to those without (G-). Methods: In 150 children aged <14 years, gastric aspirates and bronchoscopic airway lavage (BAL) were obtained during elective flexible upper endoscopy. GORD was defined as the presence of reflux oesophagitis on distal oesophageal biopsies. Results: BAL neutrophil% in the G- group (n = 63) was marginally but significantly higher than that in the G+ group (n = 77) (median 7.5 and 5, respectively; p = 0.002). Lipid laden macrophage index (LLMI) and BAL percentages of lymphocytes, eosinophils and macrophages were similar between groups. Viral studies were negative in all; bacterial cultures were positive in 20.7% of BALs and in 5.3% of gastric aspirates. BAL cultures did not reflect gastric aspirate cultures in all but one child. Conclusion: In children without respiratory disease, GORD defined by the presence of reflux oesophagitis is not associated with BAL cellular profile or LLMI abnormality. Abnormal microbiology of the airways, when present, is not related to reflux oesophagitis and does not reflect that of gastric juices. © 2005 Chang et al; licensee BioMed Central Ltd.
Abstract:
Background: Although lentiviral vectors have been widely used for in vitro and in vivo gene therapy research, few studies have systematically examined the various conditions that may affect the determination of the number of viable vector particles in a vector preparation and the use of multiplicity of infection (MOI) as a parameter for predicting gene transfer events. Methods: Lentiviral vectors encoding a marker gene were packaged and supernatants concentrated. The number of viable vector particles was determined by in vitro transduction and fluorescence microscopy and FACS analyses. Various factors that may affect the transduction process, such as vector inoculum volume, target cell number and type, vector decay, and variable vector-target cell contact and adsorption periods, were studied. MOIs between 0 and 32 were assessed on commonly used cell lines as well as a new cell line. Results: We demonstrated that the resulting values of lentiviral vector titre varied with conditions in the transduction process, including the inoculum volume of the vector, the type and number of target cells, vector stability and the length of the vector adsorption period. Vector inoculum and the number of target cells determined the frequency of gene transfer events, although not proportionally. Vector exposure time to target cells also influenced transduction results. Varying these parameters resulted in greater than 50-fold differences in the vector titre from the same vector stock. Commonly used cell lines in vector titration were less sensitive to lentiviral vector-mediated gene transfer than a new cell line, FRL 19. Within the 0-32 MOI range used to transduce four different cell lines, the higher the MOI applied, the higher the efficiency of gene transfer obtained.
Conclusion: Several variables in the transduction process affected in vitro vector titration and resulted in vastly different values from the same vector stock, complicating the use of MOI for predicting gene transfer events. Commonly used target cell lines underestimated vector titre. However, within a certain range of MOI, lentivector-mediated gene transfer events could be predicted if strictly controlled conditions are observed in the vector titration process, including the use of a sensitive cell line such as FRL 19. © 2004 Zhang et al; licensee BioMed Central Ltd.
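As a hedged aside (our illustration, not part of the study): the relation between nominal MOI and gene transfer events is often idealised with a Poisson model, under which the expected transduced fraction is 1 − exp(−MOI). The study's point is that real outcomes deviate from this idealisation because inoculum volume, cell type, adsorption time and vector decay shift the effective MOI away from the nominal one.

```python
# Illustrative Poisson model relating nominal MOI to the expected fraction of
# transduced cells (an idealisation; the study shows real outcomes deviate).
import math

def expected_transduced_fraction(moi):
    """Expected fraction of cells receiving at least one vector particle."""
    return 1.0 - math.exp(-moi)

def moi_for_target_fraction(frac):
    """Nominal MOI needed to reach a target transduced fraction."""
    return -math.log(1.0 - frac)

# Expected fractions across part of the MOI range assessed in the study
fractions = {moi: expected_transduced_fraction(moi) for moi in (0.5, 1, 4, 32)}
```

The monotone saturation of this curve mirrors the abstract's observation that, within the 0-32 range, higher MOIs gave higher (but not proportionally higher) transfer efficiencies.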
Abstract:
Aerosol black carbon (BC) mass concentrations ([BC]), measured continuously during a multi-platform field experiment, the Integrated Campaign for Aerosols, gases and Radiation Budget (ICARB, March-May 2006), from a network of eight observatories spread over geographically distinct environments of India (five mainland stations, one highland station, and two island stations, one each in the Arabian Sea and the Bay of Bengal), are examined for their spatio-temporal characteristics. During the period of study, [BC] showed large variations across the country, with values ranging from 27 μg m(-3) over industrial/urban locations to as low as 0.065 μg m(-3) over the Arabian Sea. For all mainland stations, [BC] remained high compared to the highland as well as island stations. Among the island stations, Port Blair (PBR) had a higher concentration of BC than Minicoy (MCY), implying a more absorbing nature of Bay of Bengal aerosols than those over the Arabian Sea. The highland station Nainital (NTL), in the central Himalayas, showed low values of [BC], comparable to or even lower than those of the island station PBR, indicating the prevalence of a cleaner environment there. An examination of the changes in the mean temporal features, as the season advances from winter (December-February) to pre-monsoon (March-May), revealed that: (a) diurnal variations were pronounced over all the mainland stations, with an afternoon low and a nighttime high; (b) at the islands, the diurnal variations, though resembling those over the mainland, were less pronounced; and (c) in contrast, the highland station showed an opposite pattern, with an afternoon high and a late-night or early-morning low.
The diurnal variations at all stations are mainly caused by the dynamics of the local Atmospheric Boundary Layer (ABL). At all the mainland as well as island stations (except HYD and DEL), [BC] showed a decreasing trend from January to May. This is attributed to the increased convective mixing and the resulting enhanced vertical dispersal of species in the ABL. In addition, large short-period modulations were observed at DEL and HYD, which appeared to be episodic. An examination of this in the light of the MODIS-derived fire count data over India, along with back-trajectory analysis, revealed that advection of BC from extensive forest fires and biomass-burning regions upwind was largely responsible for this episodic enhancement in BC at HYD and DEL.
Abstract:
The application of multilevel control strategies for load-frequency control (LFC) of interconnected power systems is gaining importance. A large multiarea power system may be viewed as an interconnection of several lower-order subsystems, with possible change of interconnection pattern during operation. The solution of the control problem involves the design of a set of local optimal controllers for the individual areas, in a completely decentralised environment, plus a global controller to provide the corrective signal that accounts for interconnection effects. A global controller based on the least-square-error principle suggested by Siljak and Sundareshan has been applied to the LFC problem. A more recent work utilises certain possible beneficial aspects of interconnection to permit more desirable system performance. The paper reports the application of the latter strategy to LFC of a two-area power system. The power-system model studied includes the effects of excitation system and governor controls. A comparison of the two strategies is also made.
Abstract:
The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. 
Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election which will provide detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.