783 results for open data value chain


Relevance: 40.00%

Abstract:

Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible, we need computational methods to handle the large amount of information that arises from bench to bedside, and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information.

Results: We have implemented an extension of Chado, the Clinical Module, to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built on the Entity-Attribute-Value (EAV) model. We also propose a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented, and the framework was loaded using data from a factual clinical research database: clinical and demographic data, as well as biomaterial data, obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of an ontology-based model to describe the legacy clinical data; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as other applications.

Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. We developed a framework to support translational research by integrating biomolecular information coming from different "omics" technologies with patients' clinical and socio-demographic data. Such a framework should offer flexibility, compression and robustness. The experiments performed on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
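A minimal sketch of the Entity-Attribute-Value layout that underlies a clinical module of this kind, using SQLite and hypothetical attribute names; the actual Chado Clinical Module schema is richer (ontology term references, audit fields, and so on):

```python
import sqlite3

# Minimal EAV layout: one row per (patient, attribute, value) triple,
# with attributes standardized against a controlled vocabulary table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE attribute (
    attribute_id INTEGER PRIMARY KEY,
    name TEXT UNIQUE NOT NULL,          -- e.g. an ontology term label
    datatype TEXT NOT NULL              -- 'text', 'number', 'date', ...
);
CREATE TABLE patient_attribute (
    patient_id INTEGER NOT NULL,
    attribute_id INTEGER NOT NULL REFERENCES attribute(attribute_id),
    value TEXT NOT NULL,
    PRIMARY KEY (patient_id, attribute_id)
);
""")

# New clinical variables need no schema change, only new attribute rows
# (hypothetical names, not the module's real vocabulary).
conn.executemany("INSERT INTO attribute (name, datatype) VALUES (?, ?)",
                 [("tumor_site", "text"), ("age_at_diagnosis", "number")])
conn.executemany(
    "INSERT INTO patient_attribute VALUES "
    "(?, (SELECT attribute_id FROM attribute WHERE name = ?), ?)",
    [(1, "tumor_site", "larynx"), (1, "age_at_diagnosis", "58")])

for row in conn.execute("""
    SELECT pa.patient_id, a.name, pa.value
    FROM patient_attribute pa JOIN attribute a USING (attribute_id)"""):
    print(row)
```

The point of the EAV design, as used here, is that adding a clinical variable is a data operation rather than a schema migration, which is what makes a generic integration platform feasible across heterogeneous legacy databases.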

Relevance: 40.00%

Abstract:

Background: Propolis is a natural product made from plant resins collected by honeybees (Apis mellifera) from various plant sources. Our previous studies indicated that propolis sensitivity depends on mitochondrial function and that vacuolar acidification and autophagy are important for the yeast cell death caused by propolis. Here, we extended our understanding of propolis-mediated cell death in the yeast Saccharomyces cerevisiae by applying systems biology tools to the transcriptional profiling of cells exposed to propolis.

Methods: We performed transcriptional profiling of S. cerevisiae exposed to propolis and validated our findings using real-time PCR of selected genes. Systems biology tools (a physical protein-protein interaction [PPPI] network) were applied to analyse the propolis-induced transcriptional behavior, aiming to identify which pathways are modulated by propolis in S. cerevisiae and potentially influence cell death.

Results: We observed 1,339 genes modulated in at least one time point when compared to the reference time (propolis-untreated samples) (t-test, p-value ≤ 0.01). Enrichment analysis performed with the Gene Ontology (GO) Term Finder tool showed enrichment of several biological categories among the up-regulated genes in the microarray hybridization, such as transport, transmembrane transport, and response to stress. Real-time RT-PCR analysis of selected genes showed that our microarray hybridization approach was capable of providing information about S. cerevisiae gene expression modulation with a considerably high level of confidence. Finally, the design and global topological analysis of a physical protein-protein interaction (PPPI) network stressed the importance of these pathways in the response of S. cerevisiae to propolis and correlated well with the transcriptional data obtained through the microarray analysis.

Conclusions: In summary, our data indicate that propolis affects several pathways in the eukaryotic cell. The most prominent pathways are related to oxidative stress, the mitochondrial electron transport chain, vacuolar acidification, regulation of macroautophagy associated with protein targeting to the vacuole, the cellular response to starvation, and negative regulation of transcription from RNA polymerase II promoters. Our work again emphasizes the importance of S. cerevisiae as a model system for understanding, at the molecular level, the mechanism whereby propolis causes cell death in this organism at the concentration tested here. Our study is the first to investigate systematically, using functional genomics, how propolis influences and modulates the mRNA abundance of an organism, and it may stimulate further work on propolis-mediated cell death mechanisms in fungi.
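The per-gene modulation test described above (each treated time point against the untreated reference) can be sketched as follows; the matrices, replicate counts and cutoff handling are illustrative assumptions, and the published analysis may differ in normalization and multiple-testing treatment:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy log2-expression matrices (genes x replicates) for one treated time
# point and the untreated reference; real values would be normalized
# microarray intensities.
n_genes = 5000
reference = rng.normal(8.0, 1.0, size=(n_genes, 3))
treated = reference + rng.normal(0.0, 0.3, size=(n_genes, 3))
treated[:200] += 1.5          # simulate 200 up-regulated genes

# Per-gene two-sample t-test of treated vs reference, as in the abstract;
# genes passing in any time point would be pooled across time points.
t, p = stats.ttest_ind(treated, reference, axis=1)
modulated = np.flatnonzero(p <= 0.01)
print(f"{modulated.size} genes modulated at p <= 0.01")
```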

Relevance: 40.00%

Abstract:

OBJECTIVE: To determine the value of ultrasonography in the assessment of patients with idiopathic carpal tunnel syndrome (CTS) and poor outcome after carpal tunnel release. METHODS: A total of 88 consecutive patients with CTS (104 hands) underwent open surgical release of the median nerve. Ultrasound (US) examination was performed blinded to all patient data. The median nerve area at the tunnel inlet and outlet, the retinaculum distance, and the flattening ratio were measured. The main outcome variable was the patient's overall satisfaction on a five-point Likert scale (1 = worse, 2 = no change, 3 = slightly better, 4 = much better, 5 = cured) at 3 months postoperatively. Pre- and postoperative ultrasonographic findings were analysed in relation to clinical outcome. RESULTS: Improvement (scores 4 or 5 on the Likert scale) was recorded in 75 hands (72%). After carpal tunnel release, the cross-sectional area at the tunnel inlet decreased from a mean of 14.2 to 13.3 mm² in the group with clinical improvement and from a mean of 12.5 to 11.6 mm² in the group with no change or slight improvement. No significant changes in the cross-sectional area at the tunnel outlet, retinaculum distance, or flattening ratio were observed. CONCLUSION: The reduction of the median nerve cross-sectional area at the tunnel inlet at 3 months after carpal tunnel release was similar in patients reporting cure or great improvement and in those with slight or no improvement. Ultrasonography is of limited value in the assessment of patients with poor outcome after median nerve release.

Relevance: 40.00%

Abstract:

In the past decade, the advent of efficient genome sequencing tools and high-throughput experimental biotechnology has led to enormous progress in the life sciences. Among the most important innovations is microarray technology, which allows the expression of thousands of genes to be quantified simultaneously by measuring the hybridization of a tissue of interest to probes on a small glass or plastic slide. These data are characterized by a fair amount of random noise, a predictor dimension in the thousands, and a sample size in the dozens. One of the most exciting areas to which microarray technology has been applied is the challenge of deciphering complex diseases such as cancer. In these studies, samples are taken from two or more groups of individuals with heterogeneous phenotypes, pathologies, or clinical outcomes. These samples are hybridized to microarrays in an effort to find a small number of genes strongly correlated with the groups of individuals. Even though the methods for analysing such data are by now well developed and close to reaching a standard organization (through the effort of international projects such as the Microarray Gene Expression Data (MGED) Society [1]), it is not infrequent to encounter a clinician's question for which no compelling statistical method exists. The contribution of this dissertation to deciphering disease is the development of new approaches that handle open problems posed by clinicians in specific experimental designs.

Chapter 1, starting from a necessary biological introduction, reviews microarray technologies and all the important steps of an experiment, from the production of the array through quality controls to the preprocessing steps used in the data analysis in the rest of the dissertation. Chapter 2 provides a critical review of standard analysis methods, stressing their main open problems.

Chapter 3 introduces a method to address the issue of unbalanced design in microarray experiments. In microarray experiments, the experimental design is a crucial starting point for obtaining reasonable results. In a two-class problem, an equal or similar number of samples should be collected for the two classes. However, in some cases, e.g. rare pathologies, the approach to be taken is less evident. We propose to address this issue with a modified version of SAM [2]. MultiSAM consists of a reiterated application of SAM, comparing the less populated class (LPC) with 1,000 random samplings of the same size from the more populated class (MPC). A list of differentially expressed genes is generated for each SAM application. After 1,000 reiterations, each probe is given a "score" ranging from 0 to 1,000, based on its recurrence as differentially expressed across the 1,000 lists. The performance of MultiSAM was compared to that of SAM and LIMMA [3] over two data sets simulated via beta and exponential distributions. The results of all three algorithms over low-noise data sets seem acceptable. However, on a real unbalanced two-channel data set regarding Chronic Lymphocytic Leukemia, LIMMA finds no significant probe and SAM finds 23 significantly changed probes but cannot separate the two classes, while MultiSAM finds 122 probes with score >300 and separates the data into two clusters by hierarchical clustering. We also report extra-assay validation in terms of differentially expressed genes. Although standard algorithms perform well over low-noise simulated data sets, MultiSAM seems to be the only one able to reveal subtle differences in gene expression profiles on real unbalanced data.

Chapter 4 describes a method to address similarity evaluation in a three-class problem by means of the Relevance Vector Machine [4]. Looking at microarray data in a prognostic and diagnostic clinical framework, differences are not the only quantities of interest: in some cases similarities can give useful, and sometimes even more important, information. Given three classes, the goal could be to establish, with a certain level of confidence, whether the third is more similar to the first or to the second. In this work we show that the Relevance Vector Machine (RVM) [4] can be a solution to the limitations of standard supervised classification. RVM offers many advantages compared, for example, with its well-known precursor, the Support Vector Machine (SVM). Among these advantages, the estimate of the posterior probability of class membership represents a key feature for addressing the similarity issue; this is a highly important, but often overlooked, option for any practical pattern recognition system. We focused on a tumor-grade three-class problem, with 67 samples of grade 1 (G1), 54 samples of grade 3 (G3), and 100 samples of grade 2 (G2). The goal is to find a model able to separate G1 from G3, and then to evaluate the third class, G2, as a test set, obtaining for each G2 sample the probability of membership in class G1 or class G3. The analysis showed that breast cancer samples of grade 2 have a molecular profile more similar to that of grade 1 samples. This result had been conjectured in the literature, but no measure of significance had been given before.
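A minimal sketch of the MultiSAM resampling scheme described above, with a per-probe Welch t-test standing in for the full SAM statistic (SAM additionally uses a regularized denominator and permutation-based FDR control); class sizes, the significance cutoff and the score threshold are illustrative:

```python
import numpy as np
from scipy import stats

def multisam_scores(expr, labels, n_iter=1000, alpha=0.01, seed=0):
    """Score each probe by how often it is called differentially
    expressed when the less populated class (LPC) is compared with
    equally sized random samplings of the more populated class (MPC).
    expr: probes x samples matrix; labels: 0 (LPC) or 1 (MPC) per sample."""
    rng = np.random.default_rng(seed)
    lpc = expr[:, labels == 0]
    mpc = expr[:, labels == 1]
    scores = np.zeros(expr.shape[0], dtype=int)
    for _ in range(n_iter):
        idx = rng.choice(mpc.shape[1], size=lpc.shape[1], replace=False)
        # Stand-in for SAM: per-probe Welch t-test on the balanced subsample.
        _, p = stats.ttest_ind(lpc, mpc[:, idx], axis=1, equal_var=False)
        scores += (p <= alpha)
    return scores  # recurrence counts in 0..n_iter

# Toy example: 2000 probes, 8 LPC vs 40 MPC samples, 50 shifted probes.
rng = np.random.default_rng(1)
expr = rng.normal(size=(2000, 48))
expr[:50, :8] += 1.0
labels = np.array([0] * 8 + [1] * 40)
scores = multisam_scores(expr, labels, n_iter=200)
print((scores > 0.3 * 200).sum(), "probes above the score threshold")
```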

Relevance: 40.00%

Abstract:

Subduction zones are the favoured settings for the generation of tsunamigenic earthquakes, where friction between the oceanic and continental plates causes strong seismicity. The topics and methodologies discussed in this thesis focus on understanding the rupture process of the seismic sources of great tsunami-generating earthquakes. Tsunamigenesis is controlled by several kinematic characteristics of the parent earthquake, such as the focal mechanism, the depth of the rupture, and the slip distribution along the fault area, as well as by the mechanical properties of the source zone. Each of these factors plays a fundamental role in tsunami generation. Therefore, inferring the source parameters of tsunamigenic earthquakes is crucial for understanding the generation of the consequent tsunami and thus for mitigating the risk along the coasts. The typical way to gather information about the source process is to invert the available geophysical data. Tsunami data, moreover, are useful for constraining the portion of the fault that extends offshore, generally close to the trench, which other kinds of data are unable to constrain.

In this thesis I discuss the rupture process of some recent tsunamigenic events, as inferred by means of an inverse method. First, I present the 2003 Tokachi-Oki (Japan) earthquake (Mw 8.1), for which the slip distribution on the fault was inferred by inverting tsunami waveform, GPS, and bottom-pressure data. The joint inversion of tsunami and geodetic data constrains the slip distribution on the fault much better than the separate inversions of the single datasets. We then studied the earthquake that occurred in 2007 in southern Sumatra (Mw 8.4). By inverting several tsunami waveforms, both in the near and in the far field, we determined the slip distribution and the mean rupture velocity along the causative fault. The largest patch of slip was concentrated on the deepest part of the fault, which is the likely reason for the small tsunami waves that followed the earthquake, underlining how strongly the depth of the rupture controls tsunamigenesis. Finally, we present a new rupture model for the great 2004 Sumatra earthquake (Mw 9.2), obtained by jointly inverting tsunami waveform, GPS and satellite altimetry data to infer the slip distribution, the slip direction, and the rupture velocity on the fault. Furthermore, we present a novel method to estimate, in a self-consistent way, the average rigidity of the source zone. Estimating the source-zone rigidity is important because it may play a significant role in tsunami generation; particularly for slow earthquakes, a low rigidity value is sometimes necessary to explain how an earthquake with a relatively low seismic moment can generate a significant tsunami. This latter point is relevant for explaining the mechanics of tsunami earthquakes, one of the open issues in present-day seismology. The investigation of these tsunamigenic earthquakes has underlined the importance of jointly inverting different geophysical data to determine the rupture characteristics. The results shown here have important implications for the implementation of new tsunami warning systems – particularly in the near field – for the improvement of the current ones, and for the planning of inundation maps for tsunami hazard assessment along coastal areas.
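At their linear-algebra core, the joint inversions described above can be sketched as follows: observations from the different datasets are stacked into one vector, each fault patch contributes a precomputed Green's function column, and slip is recovered under a positivity constraint. The Green's matrix below is random stand-in data; a real study computes it with elastic dislocation and tsunami propagation models, and typically adds smoothing and data weighting:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# G[i, j]: response of data sample i (tsunami gauge, GPS, altimetry)
# to unit slip on fault patch j, stacked over all datasets.
n_data, n_patches = 400, 24
G = rng.normal(size=(n_data, n_patches))

true_slip = np.zeros(n_patches)
true_slip[8:12] = 5.0                    # a single asperity, in metres
d = G @ true_slip + rng.normal(0.0, 0.5, n_data)  # noisy observations

# Joint inversion reduces to d ~= G s with s >= 0 (slip positivity).
slip, residual_norm = nnls(G, d)
print("recovered slip per patch:", np.round(slip, 2))
print("misfit:", round(residual_norm, 2))
```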

Relevance: 40.00%

Abstract:

One of the main goals of the COMPASS experiment at CERN is the determination of the gluon polarisation in the nucleon. It is determined from spin asymmetries in the scattering of 160 GeV/c polarised muons on a polarised LiD target. The gluon polarisation is accessed by selecting photon-gluon fusion (PGF) events. The PGF process can be tagged through hadrons with high transverse momenta or through charmed hadrons in the final state. The advantage of the open charm channel is that, in leading order, the PGF process is the only process for charm production, so no physical background contributes to the selected data sample. This thesis presents a measurement of the gluon polarisation from the COMPASS data taken in the years 2002-2004. In the analysis, charm production is tagged through a reconstructed D0 meson decaying as D0 → K−π+ (and charge conjugates). The reconstruction is done on a combinatorial basis. The background of wrong track pairs is reduced by applying kinematic cuts to the reconstructed D0 candidate and using the particle identification information from the Ring Imaging Cherenkov counter. In addition, the event sample is separated into D0 candidates for which a soft pion from the decay of a D* meson into the D0 meson is found, and D0 candidates without this tag. Due to the small mass difference between the D* and D0 mesons, the signal purity of the D*-tagged sample is about 7 times higher than that of the untagged sample. The gluon polarisation is measured from the event asymmetries for the different spin configurations of the COMPASS target. To improve the statistical precision of the final result, the events in the final sample are weighted. This method yields an average value of the gluon polarisation over the x-range covered by the data. For the COMPASS data from 2002-2004, the resulting value is ⟨ΔG/G⟩ = −0.47 ± 0.44 (stat.) ± 0.15 (syst.). The result is statistically compatible with the existing measurements of ⟨ΔG/G⟩ in the high-pT channel. Compared to these, the open charm measurement has the advantage of a considerably smaller model dependence.
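The combinatorial D0 reconstruction and the D*-tagging cut described above can be sketched as follows; the mass-window widths and the toy 4-momenta are illustrative assumptions, and the real selection also applies RICH particle identification and further kinematic cuts:

```python
import numpy as np

# PDG masses in GeV/c^2 (rounded); window widths below are illustrative.
M_D0, M_DSTAR = 1.8648, 2.0103

def inv_mass(*p4s):
    """Invariant mass of a combination of 4-momenta (E, px, py, pz)."""
    e, px, py, pz = np.sum(p4s, axis=0)
    return float(np.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0)))

def is_d0_candidate(p4_k, p4_pi, window=0.06):
    """Keep K-pi pairs whose invariant mass lies near the D0 mass."""
    return abs(inv_mass(p4_k, p4_pi) - M_D0) < window

def is_dstar_tagged(p4_k, p4_pi, p4_soft_pi, dm_window=0.003):
    """D* tag: M(K pi pi_soft) - M(K pi) peaks sharply at
    M(D*) - M(D0) ~ 0.1455 GeV, which is what suppresses the
    combinatorial background in the tagged sample."""
    dm = inv_mass(p4_k, p4_pi, p4_soft_pi) - inv_mass(p4_k, p4_pi)
    return abs(dm - (M_DSTAR - M_D0)) < dm_window

# Toy usage with made-up 4-momenta (not realistic K/pi kinematics).
k = np.array([1.0, 0.3, 0.0, 0.8])
pi = np.array([0.9, -0.3, 0.0, -0.8])
print(inv_mass(k, pi), is_d0_candidate(k, pi))
```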

Relevance: 40.00%

Abstract:

Information management is a key aspect of successful construction projects. Inaccurate measurements and conflicting data can lead to costly mistakes, and vague quantities can ruin estimates and schedules. Building information modeling (BIM) augments a 3D model with a wide variety of information, which reduces many sources of error and can detect conflicts before they occur. Because new technology is often more complex, it can be difficult to integrate effectively with existing business practices. In this paper, we answer two questions: how can BIM add value to construction projects, and what lessons can be learned from other companies that use BIM or similar technology? Previous research focused on the technology as if it were simply a tool, observing problems that occurred while integrating new technology into existing practices. Our research instead looks at the flow of information through a company and its network, seeing all the actors as part of an ecosystem. Building upon this idea, we propose the metaphor of an information supply chain to illustrate how BIM can add value to a construction project. The paper concludes with two case studies. The first illustrates a failure in the flow of information that could have been prevented by using BIM. The second profiles a leading design firm that has used BIM products for many years and shows the real benefits of this technology.

Relevance: 40.00%

Abstract:

Demand for bio-fuels is expected to increase, due to rising prices of fossil fuels and concerns over greenhouse gas emissions and energy security. The overall cost of biomass energy generation is primarily related to biomass harvesting activity, transportation, and storage. With a commercial-scale cellulosic ethanol processing facility in Kinross Township of Chippewa County, Michigan, about to be built, a simulation model and an optimization model have been developed to provide decision support for the facility. Both models track cost, emissions and energy consumption. While the optimization model provides guidance for a long-term strategic plan, the simulation model presents detailed output for specified operational scenarios over an annual period. Most importantly, the simulation model considers the uncertainty of spring break-up timing, i.e., seasonal road restrictions. Spring break-up timing is important because it affects the feasibility of harvesting activity and the duration of transportation restrictions, which significantly changes the availability of feedstock for the processing facility. This thesis focuses on the statistical model of spring break-up used in the simulation model. Spring break-up timing depends on various factors, including temperature, road conditions and soil type, as well as individual decision-making processes at the county level. The spring break-up model, based on historical spring break-up data from 27 counties over the period 2002-2010, starts by specifying the probability distribution of a particular county's spring break-up start day and end day, and then relates the spring break-up timing of the other counties in the harvesting zone to that first county. To estimate the dependence relationship between counties, regression analyses are conducted, including standard linear regression and reduced major axis regression. Using realizations (scenarios) of spring break-up generated by the statistical model, the simulation model can probabilistically evaluate different harvesting and transportation plans to help the bio-fuel facility select the most effective strategy. An early spring break-up, which usually indicates a longer-than-average break-up period, requires more log storage, increases total cost, and increases the probability of plant closure. The risk of plant closure may be partially offset through increased use of rail transportation, which is not subject to spring break-up restrictions. However, rail availability and rail yard storage may then become limiting factors in the supply chain. Rail use will impact total cost, energy consumption, system-wide CO2 emissions, and the reliability of providing feedstock to the bio-fuel processing facility.
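Reduced major axis (RMA) regression, named above, differs from ordinary least squares in treating both variables as error-bearing; a minimal sketch with hypothetical break-up start days for two counties:

```python
import numpy as np

def rma_fit(x, y):
    """Reduced major axis regression: symmetric in x and y, appropriate
    when both variables carry observation error.
    slope = sign(r) * sd(y) / sd(x); the line passes through the means."""
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y, ddof=1) / np.std(x, ddof=1)
    intercept = np.mean(y) - slope * np.mean(x)
    return slope, intercept

# Hypothetical break-up start days (day of year) for a reference county
# and a neighbouring county over nine seasons.
ref = np.array([68, 72, 75, 70, 80, 66, 74, 77, 71])
nbr = np.array([71, 75, 79, 73, 84, 70, 77, 80, 74])
slope, intercept = rma_fit(ref, nbr)
print(f"neighbour ~= {slope:.2f} * reference + {intercept:.1f}")
```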

Relevance: 40.00%

Abstract:

BACKGROUND: Chronic meningococcemia (CM) is a diagnostic challenge. Skin lesions are frequent but in most cases nonspecific. Polymerase chain reaction (PCR)-based diagnosis has been validated in blood and cerebrospinal fluid for acute Neisseria meningitidis infection in patients in whom routine microbiologic tests have failed to isolate the bacteria. In 2 patients with CM, we established the diagnosis by a newly developed PCR-based approach performed on skin biopsy specimens. OBSERVATIONS: Two patients presented with fever together with systemic and cutaneous manifestations suggestive of CM. Although blood cultures remained negative, we were able to identify N meningitidis in the skin lesions with a newly developed PCR assay. In 1 patient, an N meningitidis strain of the same serogroup was also isolated from a throat swab specimen. Both patients improved rapidly after appropriate antibiotic therapy. CONCLUSIONS: To our knowledge, we report the first cases of CM diagnosed by PCR testing of skin biopsy specimens. It is noteworthy that, although N meningitidis-specific PCR is highly sensitive in blood and cerebrospinal fluid in acute infections, our observations underscore the usefulness of PCR performed on skin lesions for the diagnosis of chronic N meningitidis infection. Whenever possible, this approach should be systematically employed in patients for whom N meningitidis infection cannot be confirmed by routine microbiologic investigations.

Relevance: 40.00%

Abstract:

Research and professional practices share the aim of restructuring preconceived notions of reality: both seek an understanding of social reality. Social workers use their professional competence to grasp the reality of their clients, while researchers strive to open the secrets of their research material. Development and research are now so intertwined and inherent in almost all professional practices that distinguishing between practising, developing and researching has become difficult and in many respects irrelevant. Moving towards research-based practices is possible, and it is easily applied within the framework of the qualitative research approach (Dominelli 2005, 235; Humphries 2005, 280). Social work can be understood as acts and speech acts crisscrossing between social workers and clients. When trying to catch the verbal and non-verbal hints of each other's behaviour, the actors have to make many interpretations in a more or less uncertain mental landscape. Our point of departure is the idea that the study of social work practices requires tools which effectively reveal the internal complexity of social work (see, for example, Adams & Dominelli & Payne 2005, 294-295).

The boom in qualitative research methodologies in recent decades is associated with a profound rupture in the humanities known as the linguistic turn (Rorty 1967). The idea that language does not transparently mediate our perceptions and thoughts about reality but, on the contrary, constitutes it was new and even confusing to many social scientists. Nowadays we are used to reading research reports that apply various branches of discourse analysis or narratological or semiotic approaches. Although the differences between those orientations are sophisticated, they share the idea of the predominance of language. Despite the lively research work in today's social work and the research-minded atmosphere of social work practice, semiotics has rarely been applied in social work research. Yet social work as a communicative practice concerns symbols, metaphors and all kinds of representative structures of language. Those items are at the core of semiotics, the science of signs: the science which examines people using signs in their mutual interaction and in their endeavours to make sense of the world they live in, their semiosis.

When thinking of the practice of social work and of researching it, a number of interpretational levels must be passed before the research phase is reached. First of all, social workers have to interpret their clients' situations, which are recorded in case files. In some rare cases those past situations are later reflected on in discussions or interviews, or put under the scrutiny of a researcher. Each new observation adds its own flavour to the mixture of meanings. Social workers combine their observations with previous experience and professional knowledge, and the situation at hand also influences their reactions. Moreover, the interpretations social workers make in the course of their daily working routines are never merely part of the worker's personal process; they are always inherently cultural.

The work aiming at social change is defined by the presence of an initial situation, a specific goal, and the means and ways of achieving it, which are – or should be – agreed upon by the social worker and the client in a situation that is unique and at the same time socially driven. Because of this inherently plot-based nature of social work, the practices related to it can be analysed as stories (see Dominelli 2005, 234), given, of course, that they signify and are told by someone. Research on these practices concentrates on impressions, perceptions, judgements, accounts, documents and so on. All these multifarious elements can be scrutinized as textual corpora, but not as just any textual material: in semiotic analysis, the material studied is characterized as verbal or textual and loaded with meanings.

We present a contribution to research methodology, semiotic analysis, which in our view has at least implicit relevance to social work practices. Our examples of semiotic interpretation are drawn from our dissertations (Laine 2005; Saurama 2002). The data are official documents from the archives of a child welfare agency and transcriptions of interviews with shelter employees. These data can be seen as stories told by social workers about what they have seen and felt. The official documents present only fragments and are often written in the passive voice (Saurama 2002, 70). The interviews carried out in the shelters can be described as stories whose narrators are more familiar and known; this material is characterized by the interaction between interviewer and interviewee. The levels of the story and of the telling of the story become apparent when interviews or documents are examined with semiotic tools.

The roots of semiotic interpretation lie in three different branches: American pragmatism, Saussurean linguistics in Paris, and so-called formalism in Moscow and Tartu. In this paper, however, we engage with the so-called Parisian School of semiology, whose prominent figure was A. J. Greimas. The Finnish sociologists Pekka Sulkunen and Jukka Törrönen (1997a; 1997b) have further developed Greimas's ideas in their studies on socio-semiotics, and we lean on their work. In semiotics, social reality is conceived as a relationship between subjects, observations, and interpretations, mediated by natural language, the most common sign system among human beings (Mounin 1985; de Saussure 2006; Sebeok 1986). Signification is the act of associating an abstract content (the signified) with some physical instrument (the signifier). Together these two elements form the basic concept, the "sign", which never constitutes meaning alone: meaning arises in a process of distinction in which signs are related to other signs. In this chain of signs, meaning diverges from reality (Greimas 1980, 28; Potter 1996, 70; de Saussure 2006, 46-48). One interpretative device is to think of speech as a surface beneath which deep structures – values and norms – exist (Greimas & Courtes 1982; Greimas 1987). In our view, semiotics is very much about playing with two different levels of a text: the syntagmatic surface, which is more or less faithful to the grammar, and the paradigmatic, semantic structure of values and norms hidden in the deeper meanings of interpretations.

Semiotic analysis deals precisely with the level of meaning that exists beneath the surface, but the only way to reach those meanings is through the textual level, the written or spoken text; that is why the tools are needed. In our studies we have used the semiotic square and actant analysis: the former is based on distinctions and categorizations of meanings, the latter on opening up the plotting of narratives in order to reach the value structures.

Relevance: 40.00%

Abstract:

OBJECTIVE: To determine the diagnostic value of a serologic microagglutination test (MAT) and a PCR assay on urine and blood for the diagnosis of leptospirosis in dogs with acute kidney injury (AKI). DESIGN: Cross-sectional study. ANIMALS: 76 dogs with AKI in a referral hospital (2008 to 2009). PROCEDURES: The dogs' leptospirosis status was defined with paired serologic MAT against a panel of 11 Leptospira serovars as leptospirosis-associated (n = 30) or non-leptospirosis-associated AKI (n = 12). In 34 dogs, convalescent serologic testing was not possible, and leptospirosis status was classified as undetermined. The diagnostic value of the MAT on a single acute or convalescent blood sample was determined in dogs whose leptospirosis status could be classified. The diagnostic value of a commercially available genus-specific PCR assay was evaluated on 36 blood samples and 20 urine samples. RESULTS: Serologic testing of an acute blood sample had a specificity of 100% (95% CI, 76% to 100%), a sensitivity of 50% (33% to 67%), and an accuracy of 64% (49% to 77%). Serologic testing of a convalescent blood sample had a specificity of 92% (65% to 99%), a sensitivity of 100% (87% to 100%), and an accuracy of 98% (88% to 100%). Results of the Leptospira PCR assay were negative for all samples from dogs whose leptospirosis status could be classified. CONCLUSIONS AND CLINICAL RELEVANCE: Serologic MAT results were highly accurate for the diagnosis of leptospirosis in dogs, despite a low sensitivity for early diagnosis. In this referral setting of dogs pretreated with antimicrobials, testing of blood and urine samples with a commercially available genus-specific PCR assay did not improve early diagnosis.
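For reference, the reported acute-sample figures follow from a 2x2 table, and the quoted intervals are consistent with Wilson score intervals; a minimal sketch with the counts implied by the abstract (15 of the 30 leptospirosis dogs MAT-positive, all 12 non-leptospirosis dogs MAT-negative):

```python
from statistics import NormalDist

def wilson_ci(k, n, conf=0.95):
    """Wilson score interval for a binomial proportion; a common
    choice for the small samples behind sensitivity/specificity."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / (1 + z**2 / n)
    return centre - half, centre + half

def report(name, k, n):
    lo, hi = wilson_ci(k, n)
    print(f"{name}: {k / n:.0%} (95% CI {lo:.0%} to {hi:.0%})")

report("sensitivity", 15, 30)    # TP / (TP + FN) -> 50% (33% to 67%)
report("specificity", 12, 12)    # TN / (TN + FP) -> 100% (76% to 100%)
report("accuracy", 15 + 12, 42)  # (TP + TN) / all -> 64% (49% to 77%)
```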

Relevance: 40.00%

Abstract:

Molybdenum isotopes are increasingly widely applied in the Earth sciences. They are primarily used to investigate the oxygenation of Earth's ocean and atmosphere. However, more and more fields of application are being developed, such as magmatic and hydrothermal processes, planetary sciences, and the tracking of environmental pollution. Here, we present a proposal for a unifying presentation of Mo isotope ratios in studies of mass-dependent isotope fractionation. We suggest that the δ98/95Mo of NIST SRM 3134 be defined as +0.25‰. The rationale is that the vast majority of published data are presented relative to reference materials that are similar, but not identical, and that are all slightly lighter than NIST SRM 3134. Our proposed data presentation allows a direct first-order comparison of almost all older data with future work while referring to an international measurement standard. In particular, canonical δ98/95Mo values such as +2.3‰ for seawater and −0.7‰ for marine Fe-Mn precipitates can be kept for discussion. As recent publications show that the ocean molybdenum isotope signature is homogeneous, the IAPSO ocean water standard, or any other open ocean water sample, is suggested as a secondary measurement standard, with a defined δ98/95Mo value of +2.34 ± 0.10‰ (2s).
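The re-referencing implied by this proposal follows standard delta-notation algebra; in the sketch below, δ_lab/NIST is the measured offset of a given laboratory standard from NIST SRM 3134, and the additive approximation is accurate to well below measurement precision for per-mil-level offsets:

```latex
% Converting a sample delta value from a laboratory reference frame
% to the NIST SRM 3134 frame (all values in per mil):
\delta_{\mathrm{S/NIST}} =
  \left[\left(1 + \frac{\delta_{\mathrm{S/lab}}}{1000}\right)
        \left(1 + \frac{\delta_{\mathrm{lab/NIST}}}{1000}\right)
        - 1\right] \times 1000
  \;\approx\; \delta_{\mathrm{S/lab}} + \delta_{\mathrm{lab/NIST}}
```

In particular, for a laboratory standard sitting about 0.25‰ below NIST SRM 3134 (δ_lab/NIST ≈ −0.25‰), defining the NIST SRM 3134 value as +0.25‰ leaves legacy data referenced to that standard numerically unchanged at first order, which is the stated rationale.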

Relevance: 40.00%

Abstract:

K-feldspar (Kfs) from the Chain of Ponds Pluton (CPP) is the archetypal reference material, on which thermochronological modeling of Ar diffusion in discrete "domains" was founded. We re-examine the CPP Kfs using cathodoluminescence and back-scattered electron imaging, transmission electron microscopy, and electron probe microanalysis. 40Ar/39Ar stepwise heating experiments on different sieve fractions, and on handpicked and unpicked aliquots, are compared. Our results reproduce the staircase-shaped age spectrum and the Arrhenius trajectory of the literature sample, confirming that samples collected from the same locality have an identical Ar isotope record. Even the most pristine-looking Kfs from the CPP contains successive generations of secondary, metasomatic/retrograde mineral replacements that post-date magmatic crystallization. These chemically and chronologically distinct phases are responsible for its staircase-shaped age spectra, which are modified by handpicking. While genuine within-grain diffusion gradients are not ruled out by these data, this study demonstrates that the most important control on staircase-shaped age spectra is the simultaneous presence of heterochemical, diachronous post-magmatic mineral growth. At least five distinct mineral species were identified in the Kfs separate, three of which can be traced to external fluids interacting with the CPP in a chemically open system. Sieve fractions have size-shifted Arrhenius trajectories, negating the existence of the smallest "diffusion domains". Heterochemical phases also play an important role in producing non-linear trajectories. In vacuo degassing rates recovered from Arrhenius plots are neither related to true Fick's Law diffusion nor to the staircase shape of the age spectra. The CPP Kfs used to define the "diffusion domain" model demonstrates the predominance of metasomatic alteration by hydrothermal fluids and recrystallization in establishing the natural Ar distribution amongst different coexisting phases that gives rise to the staircase-shaped age spectrum. Microbeam imaging of textures is as essential for 40Ar/39Ar hygrochronology as it is for U-Pb geochronology.
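For context, an Arrhenius trajectory from step heating reduces to an activation energy and a frequency factor by regressing ln(D/a²) against 1/T; a minimal sketch with synthetic single-domain values (deriving D/a² from measured fractional 39Ar release requires the Fechtig & Kalbitzer relations, omitted here):

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol*K)

# Synthetic step temperatures and D/a^2 values (1/s) following a
# single-domain Arrhenius law with Ea = 190 kJ/mol, log10(D0/a^2) = 5.
T = np.arange(700.0, 1400.0, 50.0)           # step temperatures, K
true_Ea, true_log10_D0 = 190.0, 5.0
ln_D_a2 = true_log10_D0 * np.log(10) - true_Ea / (R * T)
ln_D_a2 += np.random.default_rng(0).normal(0, 0.1, T.size)  # scatter

# Linear fit in Arrhenius coordinates: ln(D/a^2) = ln(D0/a^2) - Ea/(R*T).
slope, intercept = np.polyfit(1.0 / T, ln_D_a2, 1)
print(f"Ea ~ {-slope * R:.0f} kJ/mol, "
      f"log10(D0/a^2) ~ {intercept / np.log(10):.1f}")
```

In these coordinates, a size-shifted trajectory between sieve fractions, as reported above, moves the intercept ln(D0/a²) with the physical grain size a while leaving the slope, and hence the apparent Ea, unchanged.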

Relevance: 40.00%

Abstract:

BACKGROUND: High-dose chemotherapy (HDCT) with autologous stem cell transplantation (ASCT) has been reported to confer a better prognosis in systemic light chain (AL) amyloidosis than conventional chemotherapy. However, only limited data are available so far on the treatment and outcome of AL amyloidosis patients in Switzerland. METHODS: Within a single-centre cohort of patients with biopsy-confirmed AL amyloidosis diagnosed between January 1995 and December 2012, we investigated treatment effects in patients treated with conventional chemotherapy versus HDCT with ASCT. RESULTS: We identified 50 patients with AL amyloidosis treated with conventional chemotherapy and 13 patients who received HDCT with ASCT. Clinical characteristics differed between the groups in patient age (59 years for patients with HDCT/ASCT vs 69 years; p = 0.0006) and troponin-T value (0.015 μg/l vs 0.08 μg/l; p = 0.0279). Patients with ASCT showed a trend towards better overall survival, with median survival not yet reached, compared with 53 months for patients on conventional chemotherapy (p = 0.0651). CONCLUSION: Our results suggest that light chain (AL) amyloidosis patients considered fit to undergo HDCT and ASCT may have a better outcome than patients treated exclusively with conventional chemotherapy regimens; however, the better performance status of patients receiving HDCT may have contributed to this treatment effect.
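A minimal sketch of the survival comparison described above, using the third-party lifelines package on synthetic follow-up data (the abstract does not state which test produced p = 0.0651; the log-rank test is the conventional choice):

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Synthetic follow-up times (months) and death/event flags; group sizes
# mirror the cohort (50 conventional chemotherapy, 13 HDCT/ASCT).
t_conv = rng.exponential(70, 50);  e_conv = rng.random(50) < 0.7
t_hdct = rng.exponential(120, 13); e_hdct = rng.random(13) < 0.4

km = KaplanMeierFitter()
km.fit(t_conv, e_conv, label="conventional chemotherapy")
print("median OS, conventional:", km.median_survival_time_)
km.fit(t_hdct, e_hdct, label="HDCT + ASCT")
print("median OS, HDCT/ASCT:", km.median_survival_time_)

res = logrank_test(t_conv, t_hdct,
                   event_observed_A=e_conv, event_observed_B=e_hdct)
print("log-rank p =", round(res.p_value, 4))
```

A "median not yet reached", as reported above, corresponds to the Kaplan-Meier curve never dropping below 0.5 over the observed follow-up; lifelines then returns infinity for the median.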