26 results for Construction. Indicators System. Performance. Ergonomics. Validation

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To review and update the conceptual framework, indicator content and research priorities of the Organisation for Economic Co-operation and Development's (OECD) Health Care Quality Indicators (HCQI) project, after a decade of collaborative work. DESIGN: A structured assessment was carried out using a modified Delphi approach, followed by a consensus meeting, to assess the suite of HCQI for international comparisons, agree on revisions to the original framework and set priorities for research and development. SETTING: International group of countries participating in OECD projects. PARTICIPANTS: Members of the OECD HCQI expert group. RESULTS: A reference matrix, based on a revised performance framework, was used to map and assess all seventy HCQI routinely calculated by the OECD expert group. A total of 21 indicators were excluded, owing to concerns about: (i) relevance; (ii) international comparability, particularly where heterogeneous coding practices might induce bias; (iii) feasibility, when the number of countries able to report was limited and the added value did not justify the sustained effort; and (iv) actionability, for indicators that were unlikely to improve in response to targeted policy interventions. CONCLUSIONS: The revised OECD framework for HCQI represents a new milestone in a long-standing international collaboration among a group of countries committed to building common ground for performance measurement. The expert group believes that continuing this work is paramount to provide decision makers with a validated toolbox for acting directly on quality improvement strategies.

Relevance:

100.00%

Publisher:

Abstract:

Achieving a high degree of dependability in complex macro-systems is challenging. Because of the large number of components and the numerous independent teams involved, an overview of global system performance is usually lacking to adequately support both design and operation. A functional failure mode, effects and criticality analysis (FMECA) approach is proposed to address the dependability optimisation of large and complex systems. The basic inductive FMECA model has been enriched to include considerations such as operational procedures, alarm systems, environmental and human factors, and operation in degraded mode. Its implementation in a commercial software tool allows active linking between the functional layers of the system and facilitates data processing and retrieval, contributing actively to system optimisation. The proposed methodology has been applied to optimise dependability in a railway signalling system. Signalling systems are a typical example of large complex systems made of multiple hierarchical layers. The proposed approach appears appropriate for assessing the global risk and availability level of the system as well as for identifying its vulnerabilities. This enriched FMECA approach overcomes some of the limitations and pitfalls previously reported with classical FMECA approaches.
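Classical FMECA ranks failure modes by a risk priority number (severity × occurrence × detection); the enriched functional variant described above extends the data model, but the core ranking step can be sketched as follows (all scales, fields and signalling failure modes here are illustrative assumptions, not data from the study):

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One row of a (toy) functional FMECA table."""
    function: str
    mode: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        # Classical risk priority number used to rank vulnerabilities.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("track-side signal", "lamp failure", 7, 4, 2),
    FailureMode("interlocking", "spurious route release", 9, 2, 5),
    FailureMode("axle counter", "miscount in degraded mode", 8, 3, 6),
]

# Rank failure modes from most to least critical.
for fm in sorted(modes, key=lambda f: f.rpn, reverse=True):
    print(f"{fm.function:18s} {fm.mode:28s} RPN={fm.rpn}")
```

In a real study the ranking would be complemented by the operational and human-factor considerations the abstract mentions, which do not reduce to a single product.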

Relevance:

100.00%

Publisher:

Abstract:

The value of earmarks as an efficient means of personal identification is still subject to debate. It has been argued that the field lacks a firm, systematic and structured data basis to help practitioners form their conclusions. Typically, there is a paucity of research guiding practitioners as to the selectivity of the features used in the comparison process between an earmark and reference earprints taken from an individual. This study proposes a system for the automatic comparison of earprints and earmarks, operating without any manual extraction of key points or manual annotations. For each donor, a model is created using multiple reference prints, hence capturing the donor's within-source variability. For each comparison between a mark and a model, the images are automatically aligned and a proximity score, based on a normalized 2D correlation coefficient, is calculated. Appropriate use of this score allows deriving a likelihood ratio that can be explored under known states of affairs (both in cases where it is known that the mark was left by the donor who gave the model and, conversely, in cases where it is established that the mark originates from a different source). To assess the system performance, a first dataset containing 1229 donors, compiled during the FearID research project, was used. Based on these data, for mark-to-print comparisons the system performed with an equal error rate (EER) of 2.3%, and about 88% of marks were found in the first 3 positions of a hit list. For print-to-print transactions, results show an equal error rate of 0.5%. The system was then tested using real-case data obtained from police forces.
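The proximity score described, a normalized 2D correlation coefficient between aligned images, can be sketched as follows (the automatic alignment and likelihood-ratio steps are omitted, and the synthetic images are illustrative assumptions):

```python
import numpy as np

def proximity_score(mark: np.ndarray, model: np.ndarray) -> float:
    """Normalized 2D correlation coefficient between two aligned images."""
    a = mark - mark.mean()
    b = model - model.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

rng = np.random.default_rng(0)
ref_print = rng.random((64, 64))                      # reference earprint
same_source = ref_print + 0.1 * rng.random((64, 64))  # noisy mark, same donor
different = rng.random((64, 64))                      # mark from another donor

print(round(proximity_score(ref_print, same_source), 3))  # close to 1
print(round(proximity_score(ref_print, different), 3))    # close to 0
```

Scores near 1 suggest a common source; in the study, the distribution of such scores under both hypotheses is what feeds the likelihood ratio.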

Relevance:

100.00%

Publisher:

Abstract:

In this work we present a first feasibility study of the ClearPEM technology for simultaneous PET-MR imaging. The mutual electromagnetic interference (EMI) effects between both systems were evaluated on a 7 T magnet by characterizing the response behavior of the ClearPEM detectors and front-end electronics to pulsed RF power and switched magnetic field gradients; and by analyzing the MR system performance degradation from noise pickup into the RF receiver chain, and from magnetic susceptibility artifacts caused by PET front-end materials.

Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present the segmentation of the head and neck lymph node regions using a new active contour-based atlas registration model. We propose to segment the lymph node regions without directly including them in the atlas registration process; instead, they are segmented using the dense deformation field computed from the registration of the atlas structures with distinct boundaries. This approach results in robust and accurate segmentation of the lymph node regions even in the presence of significant anatomical variations between the atlas image and the patient's image to be segmented. We also present a quantitative evaluation of lymph node region segmentation using various statistical as well as geometrical metrics: sensitivity, specificity, Dice similarity coefficient and Hausdorff distance. A comparison of the proposed method with two other state-of-the-art methods is presented. The robustness of the proposed method to the atlas selection, in segmenting the lymph node regions, is also evaluated.
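Of the evaluation metrics listed, the Dice similarity coefficient is the simplest to state precisely: 2|A∩B| / (|A| + |B|) for binary masks A and B. A minimal sketch on toy masks (the masks are illustrative, not patient data):

```python
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy example: automatic segmentation vs. a shifted manual reference.
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[3:9, 3:9] = True    # 36 pixels, overlap is 5x5 = 25 pixels

print(round(dice(auto, manual), 3))  # 2*25/(36+36) = 0.694
```

A Dice of 1.0 means perfect overlap; values are typically reported alongside Hausdorff distance, which instead captures worst-case boundary error.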

Relevance:

100.00%

Publisher:

Abstract:

In this thesis, we study the behavioural aspects of agents interacting in queueing systems, using simulation models and experimental methodologies. Each period, customers must choose a service provider. The objective is to analyse the impact of the customers' and providers' decisions on queue formation. In a first setting, we consider customers with a certain degree of risk aversion. Based on their perception of the average waiting time and of its variability, they form an estimate of the upper bound of the waiting time at each provider. Each period, they choose the provider for which this estimate is lowest. Our results indicate that there is no monotonic relationship between the degree of risk aversion and overall performance. Indeed, a population of customers with an intermediate degree of risk aversion generally incurs a higher average waiting time than a population of risk-neutral or strongly risk-averse agents. Next, we incorporate the providers' decisions by allowing them to adjust their service capacity based on their perception of the average arrival rate. The results show that the customers' behaviour and the providers' decisions exhibit strong path dependence. Furthermore, we show that the providers' decisions cause the weighted average waiting time to converge towards the market's benchmark waiting time. Finally, a laboratory experiment in which subjects play the role of a service provider allowed us to conclude that capacity installation and dismantling delays significantly affect the subjects' performance and decisions.
In particular, the provider's decisions are influenced by his order backlog, his currently available service capacity, and the capacity adjustment decisions he has taken but not yet implemented. - Queuing is a fact of life that we witness daily. We have all had the experience of waiting in line for some reason, and we also know that it is an annoying situation. As the adage says, "time is money"; this is perhaps the best way of stating what queuing problems mean for customers. Human beings are not very tolerant, but they are even less so when having to wait in line for service. Banks, roads, post offices and restaurants are just some examples where people must wait for service. Studies of queuing phenomena have typically addressed the optimisation of performance measures (e.g. average waiting time, queue length and server utilisation rates) and the analysis of equilibrium solutions. Although this work has been useful for improving the efficiency of many queueing systems and for designing new processes in social and physical systems, it has provided only a limited ability to explain the behaviour observed in many real queues; the individual behaviour of the agents involved in queueing systems and their decision-making processes have received little attention. In this dissertation we depart from this traditional research by analysing how the agents involved in the system make decisions, instead of focusing on optimising performance measures or analysing an equilibrium solution. This dissertation builds on and extends the framework proposed by van Ackere and Larsen (2004) and van Ackere et al. (2010). We focus on studying behavioural aspects of queueing systems and incorporate this still underdeveloped framework into the operations management field. In the first chapter of this thesis we provide a general introduction to the area, as well as an overview of the results.
In Chapters 2 and 3, we use Cellular Automata (CA) to model service systems where captive interacting customers must decide each period which facility to join for service. They base this decision on their expectations of sojourn times. Each period, customers use new information (their most recent experience and that of their best-performing neighbour) to form expectations of sojourn time at the different facilities. Customers update their expectations using an adaptive expectations process to combine their memory and their new information. We label "conservative" those customers who give more weight to their memory than to the new information. In contrast, when they give more weight to new information, we call them "reactive". In Chapter 2, we consider customers with different degrees of risk-aversion who take uncertainty into account. They choose which facility to join based on an estimated upper bound of the sojourn time, which they compute using their perceptions of the average sojourn time and the level of uncertainty. We assume the same exogenous service capacity for all facilities, which remains constant throughout. We first analyse the collective behaviour generated by the customers' decisions. We show that the system achieves low weighted average sojourn times when the collective behaviour results in neighbourhoods of customers loyal to a facility and the customers are approximately equally split among all facilities. The lowest weighted average sojourn time is achieved when exactly the same number of customers patronises each facility, implying that they do not wish to switch facility. In this case, the system has achieved the Nash equilibrium. We show that there is a non-monotonic relationship between the degree of risk-aversion and system performance. Customers with an intermediate degree of risk-aversion typically incur higher sojourn times; in particular, they rarely achieve the Nash equilibrium.
Risk-neutral customers have the highest probability of achieving the Nash equilibrium. Chapter 3 considers a service system similar to the previous one but with risk-neutral customers, and relaxes the assumption of exogenous service rates. In this sense, we model a queueing system with endogenous service rates by enabling managers to adjust the service capacity of the facilities. We assume that managers do so based on their perceptions of the arrival rates and use the same principle of adaptive expectations to model these perceptions. We consider service systems in which the managers' decisions take time to be implemented. Managers are characterised by a profile determined by the speed at which they update their perceptions, the speed at which they take decisions, and how coherent they are in accounting for previous decisions still to be implemented when taking their next decision. We find that the managers' decisions exhibit a strong path-dependence: owing to the initial conditions of the model, the facilities of managers with identical profiles can evolve completely differently. In some cases the system becomes "locked in" to a monopoly or duopoly situation. The competition between managers causes the weighted average sojourn time of the system to converge to the exogenous benchmark value which they use to estimate their desired capacity. Concerning the managers' profile, we found that the more conservative a manager is regarding new information, the larger the market share his facility achieves. Additionally, the faster he takes decisions, the higher the probability that he achieves a monopoly position. In Chapter 4 we consider a one-server queueing system with non-captive customers. We carry out an experiment aimed at analysing the way human subjects, taking on the role of the manager, make decisions in a laboratory regarding the capacity of a service facility. We adapt the model proposed by van Ackere et al. (2010).
This model relaxes the assumption of a captive market and allows current customers to decide whether or not to use the facility. Additionally, the facility also has potential customers who do not currently patronise it but might consider doing so in the future. We identify three groups of subjects whose decisions cause similar behavioural patterns. These groups are labelled gradual investors, lumpy investors, and random investors. Using an autocorrelation analysis of the subjects' decisions, we illustrate that these decisions are positively correlated with the decisions taken one period earlier. Subsequently, we formulate a heuristic to model the decision rule used by subjects in the laboratory. We found that this decision rule fits very well for those subjects who gradually adjust capacity, but it does not capture the behaviour of the subjects in the other two groups. In Chapter 5 we summarise the results and provide suggestions for further work. Our main contribution is the use of simulation and experimental methodologies to explain the collective behaviour generated by customers' and managers' decisions in queueing systems, as well as the analysis of the individual behaviour of these agents. In this way, we differ from the typical queueing literature, which focuses on optimising performance measures and analysing equilibrium solutions. Our work can be seen as a first step towards understanding the interaction between customer behaviour and the capacity adjustment process in queueing systems. This framework is still in its early stages and accordingly there is large potential for further work spanning several research topics. Interesting extensions include incorporating other characteristics of queueing systems which affect the customers' experience (e.g. balking, reneging and jockeying); providing customers and managers with additional information with which to take their decisions (e.g.
service price, quality, customers' profile); analysing different decision rules and studying other characteristics which determine the profile of customers and managers.
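The adaptive expectations process used by both customers and managers amounts to exponential smoothing: the new expectation is a weighted average of the old expectation and the latest observation. A minimal sketch (the jump scenario and the α values are illustrative assumptions):

```python
def update_expectation(previous: float, observed: float, alpha: float) -> float:
    """Adaptive expectations: E_t = alpha * E_{t-1} + (1 - alpha) * x_t.

    alpha near 1 -> a "conservative" agent (weights memory);
    alpha near 0 -> a "reactive" agent (weights new information).
    """
    return alpha * previous + (1.0 - alpha) * observed

# A facility whose true sojourn time jumps from 5 to 10 at period 10:
results = {}
for label, alpha in [("conservative", 0.9), ("reactive", 0.2)]:
    e = 5.0
    for t in range(20):
        x = 5.0 if t < 10 else 10.0
        e = update_expectation(e, x, alpha)
    results[label] = e
    print(f"{label}: expectation after 20 periods = {e:.2f}")
```

The conservative agent still underestimates the new sojourn time ten periods after the jump, while the reactive agent has essentially converged, which is the memory-versus-responsiveness trade-off the thesis explores.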

Relevance:

50.00%

Publisher:

Abstract:

BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work on simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, is performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
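The bias mechanism under study can be illustrated with a toy simulation: when batch membership is fully confounded with the group labels, cross-validation on the pooled data credits the classifier with the batch signal even though there is no true group effect. A minimal sketch using a nearest-centroid classifier as a stand-in for the classifiers named in the abstract (all sample sizes and effect magnitudes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 50

def simulate(confounded: bool):
    """Two groups with NO true group signal; batch 1 gets an additive shift.
    When `confounded`, batch coincides with group, so the shift mimics
    a group difference."""
    y = np.repeat([0, 1], n // 2)
    X = rng.normal(size=(n, p))
    batch = y if confounded else rng.integers(0, 2, n)
    X[batch == 1] += 1.0  # additive batch effect
    return X, y

def cv_accuracy(X, y, k=5):
    """k-fold CV accuracy of a nearest-centroid classifier."""
    idx = rng.permutation(len(y))
    accs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[fold] - c1, axis=1)
                < np.linalg.norm(X[fold] - c0, axis=1)).astype(int)
        accs.append((pred == y[fold]).mean())
    return float(np.mean(accs))

# With no real signal, CV accuracy should stay near chance; full
# confounding between batch and group inflates it towards 1.
acc_conf = cv_accuracy(*simulate(True))
acc_bal = cv_accuracy(*simulate(False))
print("confounded:", acc_conf)
print("balanced:  ", acc_bal)
```

The inflated estimate in the confounded case would not generalise to independent data, which is exactly the estimation bias the study quantifies.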

Relevance:

40.00%

Publisher:

Abstract:

BACKGROUND: To date, there is no quality assurance program that correlates patient outcome to the perfusion service provided during cardiopulmonary bypass (CPB). A score was devised, incorporating objective parameters likely to influence patient outcome. The purpose was to create a new method for evaluating the quality of care the perfusionist provides during CPB procedures and to determine whether it predicts patient morbidity and mortality. METHODS: We analysed 295 consecutive elective patients. We chose 10 parameters: fluid balance, blood transfused, Hct, ACT, PaO2, PaCO2, pH, BE, potassium and CPB time. These made up the PerfSCORE. Distribution analysis was performed using the Shapiro-Wilk test, and we examined the correlation of the score with mortality rate, ICU stay and length of mechanical ventilation. Univariate analysis (UA) using linear regression was established for each parameter, with statistical significance set at p < 0.05. Multivariate analysis (MA) was performed with the same parameters. RESULTS: The mean age was 63.8 +/- 12.6 years, with 70% males. There were 180 CABG, 88 valve, and 27 combined CABG/valve procedures. A PerfSCORE of 6.6 +/- 2.4 (0-20), mortality of 2.7% (8/295), CPB time of 100 +/- 41 min (19-313), ICU stay of 52 +/- 62 hrs (7-564) and mechanical ventilation of 10.5 +/- 14.8 hrs (0-564) were calculated. CPB time, fluid balance, PaO2, PerfSCORE and blood transfused were significantly correlated with mortality (UA, p < 0.05). CPB time, blood transfused and PaO2 were also parameters predicting mortality (MA, p < 0.01). Only pH was significantly correlated with ICU stay (UA). Ultrafiltration (UF) and CPB time were significantly correlated (UA, p < 0.01), while UF (p < 0.05) was the only parameter predicting mechanical ventilation duration (MA). CONCLUSIONS: CPB time, blood transfused and PaO2 are independent risk factors for mortality.
Fluid balance, blood transfusion, PaO2, PerfSCORE and CPB time are independent parameters for predicting morbidity. PerfSCORE is a quality of perfusion measure that objectively quantifies perfusion performance.
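The abstract does not give the PerfSCORE weighting, so the following is purely a hypothetical sketch of how a composite perfusion score could aggregate penalty points over per-parameter target bands (all bands and patient values are invented for illustration):

```python
def band_points(value: float, low: float, high: float) -> int:
    """0 points inside the target band, 1 just outside, 2 far outside.
    'Far' is defined here, arbitrarily, as more than one band-width away."""
    if low <= value <= high:
        return 0
    span = high - low
    if low - span <= value <= high + span:
        return 1
    return 2

targets = {                   # parameter: (low, high) target band -- invented
    "Hct": (24, 32),          # %
    "PaO2": (150, 250),       # mmHg
    "pH": (7.35, 7.45),
    "potassium": (4.0, 5.5),  # mmol/L
}

patient = {"Hct": 22, "PaO2": 300, "pH": 7.30, "potassium": 4.8}
score = sum(band_points(patient[k], *targets[k]) for k in targets)
print("composite score contribution:", score)
```

A higher total would indicate more time spent outside physiological targets during bypass; the published score presumably uses clinically validated bands and weights.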

Relevance:

40.00%

Publisher:

Abstract:

Tobacco consumption is a global epidemic responsible for a vast burden of disease. With pharmacological properties sought after by consumers and responsible for addiction, nicotine is the main driver of this phenomenon. Accordingly, smokeless tobacco products are growing in popularity in sport, owing to potential performance-enhancing properties and the absence of adverse effects on the respiratory system. Nevertheless, nicotine does not appear on the 2011 World Anti-Doping Agency (WADA) Prohibited List or Monitoring Program, for lack of a comprehensive large-scale prevalence survey. This work therefore describes a one-year monitoring study of urine specimens from professional athletes of different disciplines, covering 2010 and 2011. A method for the detection and quantification of nicotine, its major metabolites (cotinine, trans-3-hydroxycotinine, nicotine-N'-oxide and cotinine-N-oxide) and minor tobacco alkaloids (anabasine, anatabine and nornicotine) was developed, relying on ultra-high pressure liquid chromatography coupled to triple quadrupole mass spectrometry (UHPLC-TQ-MS/MS). A simple and fast dilute-and-shoot sample treatment was performed, followed by hydrophilic interaction chromatography-tandem mass spectrometry (HILIC-MS/MS) operated in positive electrospray ionization (ESI) mode with multiple reaction monitoring (MRM) data acquisition. After method validation, assessing the prevalence of nicotine consumption in sport involved the analysis of 2185 urine samples, covering 43 different sports. Concentration distributions of major nicotine metabolites, minor nicotine metabolites and tobacco alkaloids ranged from 10 ng/mL (LLOQ) up to 32,223, 6670 and 538 ng/mL, respectively. Compounds of interest were detected at trace levels in 23.0% of urine specimens, with concentration levels corresponding to an exposure within the last three days for 18.3% of samples.
Likewise, applying conservative concentration limits for active nicotine consumption prior to and/or during sport practice (50 ng/mL for nicotine, cotinine and trans-3-hydroxycotinine, and 25 ng/mL for nicotine-N'-oxide, cotinine-N-oxide, anabasine, anatabine and nornicotine) revealed a prevalence of 15.3% amongst athletes. While this number may appear lower than the worldwide smoking prevalence of around 25%, focusing the study on selected sports highlighted more alarming findings. Indeed, active nicotine consumption in ice hockey, skiing, biathlon, bobsleigh, skating, football, basketball, volleyball, rugby, American football, wrestling and gymnastics was found to range between 19.0% and 55.6%. Therefore, considering the adverse effects of smoking on the respiratory tract and the numerous health threats detrimental to sport practice at top level, the likelihood of smokeless tobacco consumption for performance enhancement is strongly supported.
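The prevalence figure follows from per-analyte concentration cut-offs; a small sketch applying the limits quoted above to hypothetical specimens (the sample values are invented):

```python
# Concentration limits (ng/mL) from the study, used to flag active
# nicotine consumption prior to / during competition.
LIMITS = {
    "nicotine": 50, "cotinine": 50, "trans-3-hydroxycotinine": 50,
    "nicotine-N'-oxide": 25, "cotinine-N-oxide": 25,
    "anabasine": 25, "anatabine": 25, "nornicotine": 25,
}

def active_consumer(sample: dict) -> bool:
    """A specimen is flagged if any analyte reaches its limit."""
    return any(sample.get(analyte, 0.0) >= limit
               for analyte, limit in LIMITS.items())

samples = [
    {"cotinine": 320.0, "trans-3-hydroxycotinine": 150.0},  # heavy exposure
    {"cotinine": 12.0},                                     # trace level only
    {"anabasine": 30.0},                                    # smokeless marker
]
flagged = sum(active_consumer(s) for s in samples)
print(f"prevalence in this toy set: {flagged}/{len(samples)}")
```

Across the real 2185-sample set, this kind of rule yields the 15.3% prevalence reported, versus 23.0% when any trace-level detection is counted.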

Relevance:

40.00%

Publisher:

Abstract:

The European Surveillance of Congenital Anomalies (EUROCAT) network of population-based congenital anomaly registries is an important source of epidemiologic information on congenital anomalies in Europe, covering live births, fetal deaths from 20 weeks gestation, and terminations of pregnancy for fetal anomaly. EUROCAT's policy is to strive for high-quality data while ensuring consistency and transparency across all member registries. A set of 30 data quality indicators (DQIs) was developed to assess five key elements of data quality: completeness of case ascertainment, accuracy of diagnosis, completeness of information on EUROCAT variables, timeliness of data transmission, and availability of population denominator information. This article describes each of the individual DQIs and presents the output for each registry, as well as the EUROCAT (unweighted) average, for 29 full member registries for 2004-2008. This information is also available on the EUROCAT website for previous years. The EUROCAT DQIs allow registries to evaluate their performance in relation to other registries and allow appropriate interpretations to be made of the data collected. The DQIs provide direction for improving data collection and ascertainment, and they allow annual assessment for monitoring continuous improvement. The DQIs are constantly reviewed and refined to best document registry procedures and processes regarding data collection, to ensure the appropriateness of the DQIs, and to ensure transparency so that the data collected can make a substantial and useful contribution to epidemiologic research on congenital anomalies.

Relevance:

40.00%

Publisher:

Abstract:

Syrian dry areas have been, for several millennia, a place of interaction between human populations and the environment. While environmental constraints and heterogeneity condition human occupation and the exploitation of resources, socio-political, economic and historical elements play a fundamental role. Since the late 1980s, Syrian dry areas have been viewed as suffering a serious water crisis due to groundwater overdraft. The Syrian administration and international development agencies believe that groundwater overexploitation is also leading to a decline in agricultural activities and to increasing poverty, and that action is thus required to address these problems. However, the overexploitation diagnosis needs to be reviewed. The overexploitation discourse appeared in the context of Syria's opening to international organizations and to the market economy, echoing the international discourse of a "global water crisis". The diagnosis is based on national indicators recycling old Soviet data that have not been updated. In the post-Soviet era, the Syrian national water policy seems to have abandoned large surface-water irrigation projects in favor of a strategy of water use rationalization and groundwater conservation in crisis regions, especially in the district of Salamieh. This groundwater conservation policy has a number of inconsistencies. It is justified for the administration, and probably also for international donors, since it responds to an indisputable environmental emergency. However, efforts to conserve water are anecdotal or even counterproductive. The water conservation policy appears a posteriori as an extension of the national policy of food self-sufficiency.
The dominant interpretation of overexploitation, and more generally of the water crisis, prevents any critical discussion of the status of the resources and of the agricultural system in general, and thus undermines any attempt to discuss alternatives with respect to groundwater management, allocation, and their inclusion in development programs. A revisited diagnosis of the situation needs to take into account the spatial and temporal dimensions of groundwater exploitation and to analyze the co-evolution of the hydrogeological and agricultural systems. It should highlight the adjustments adopted to cope with environmental and economic variability, changes in water availability and the enforcement of regulatory measures. These elements play an important role in water availability and in the spatial, temporal and sectoral allocation of the water resource. Groundwater exploitation over the last century has obviously had an impact on the environment, but the changes are not necessarily catastrophic. Current groundwater use in central Syria increases uncertainty by reducing the ability of aquifers to buffer climatic changes. However, the climatic factor is not the only source of uncertainty. The high volatility of the prices of commodities, fuel, land and water, depending on the market but also on the will (and capacity) of the Syrian State to preserve social peace, is a strong source of uncertainty. Research should consider the whole range of possibilities and propose alternatives that take into consideration the risks they imply for water users, the political will to support or not the local access to water - thus involving a redefinition of economic and social objectives - and finally the ability of international organizations to reconsider pre-established diagnoses.


Relevance:

40.00%

Publisher:

Abstract:

Applications of genetic constructs with multiple promoters fused with reporter genes, enabling the simultaneous monitoring of various events in cells, have gained special attention in recent years. Lentiviral vectors, with their distinctive characteristics, have been considered for monitoring the developmental changes of cells in vitro. In this study, we constructed a novel lentiviral vector (FUM-M) containing two germ cell-specific promoters (Stra8 and c-kit) fused with the ZsGreen and DsRed2 reporter genes, and evaluated its efficiency in different cells following treatment with retinoic acid and DMSO. Several cell lines (P19, GC-1 spg and HEK293T) were transduced with this vector, and the functional capabilities of the promoters were verified by flow cytometry and quantitative RT-PCR. Our results indicate that FUM-M shows dynamic behavior in the presence and absence of extrinsic factors. A correlation was also observed between the activity of the promoters present in the lentiviral construct and the endogenous levels of the Stra8 and c-kit mRNAs in the cells. In conclusion, we recommend this strategy, which needs further optimization of the constructs, as a beneficial and practical way to screen chemical inducers involved in cellular differentiation toward germ-like cells.

Relevance:

40.00%

Publisher:

Abstract:

The carbon isotope ratio of androgens in urine specimens is routinely determined to exclude an abuse of testosterone or testosterone prohormones by athletes. The increasing application of gas chromatography/combustion/isotope ratio mass spectrometry (GC/C/IRMS) in recent years for targeted and systematic investigations of samples has created a demand for rapid sample throughput as well as high selectivity in the extraction process, particularly in the case of conspicuous samples. For that purpose, we present herein the complementary use of an SPE-based assay and an HPLC fractionation method as a two-stage strategy for the isolation of testosterone metabolites and endogenous reference compounds prior to GC/C/IRMS analysis. Assay validation demonstrated acceptable performance in terms of intermediate precision (range: 0.1-0.4 per thousand), and Bland-Altman analyses revealed no significant bias (0.2 per thousand). For further validation of this two-stage analysis strategy, all the specimens (n=124) collected during a major sport event were processed.

Relevance:

40.00%

Publisher:

Abstract:

RATIONALE: The aim of the work was to develop and validate a method for the quantification of vitamin D metabolites in serum using ultra-high-pressure liquid chromatography coupled to mass spectrometry (LC/MS), and to validate a high-resolution mass spectrometry (LC/HRMS) approach against a tandem mass spectrometry (LC/MS/MS) approach using a large clinical sample set. METHODS: A fast, accurate and reliable method for the quantification of the vitamin D metabolites, 25-hydroxyvitamin D2 (25OH-D2) and 25-hydroxyvitamin D3 (25OH-D3), in human serum was developed and validated. The C3 epimer of 25OH-D3 (3-epi-25OH-D3) was also separated from 25OH-D3. The samples were rapidly prepared via a protein precipitation step followed by solid-phase extraction (SPE) using an HLB μelution plate. Quantification was performed using both LC/MS/MS and LC/HRMS systems. RESULTS: Recovery, matrix effect, and inter- and intra-day reproducibility were assessed. Lower limits of quantification (LLOQs) were determined for both 25OH-D2 and 25OH-D3 for the LC/MS/MS approach (6.2 and 3.4 µg/L, respectively) and the LC/HRMS approach (2.1 and 1.7 µg/L, respectively). A Passing & Bablok fit was determined between both approaches for 25OH-D3 on 662 clinical samples (y = 1.11 + 1.06x). It was also shown that results can be affected by the inclusion of the isomer 3-epi-25OH-D3. CONCLUSIONS: A method for the quantification of the relevant vitamin D metabolites was successfully developed and validated here. It was shown that LC/HRMS is an accurate, powerful and easy-to-use approach for quantification within clinical laboratories. Finally, the results here suggest that it is important to separate 3-epi-25OH-D3 from 25OH-D3. Copyright © 2012 John Wiley & Sons, Ltd.
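The reported Passing & Bablok fit (1.11 + 1.06x) can be used to map a 25OH-D3 result from one platform onto the other's scale; a minimal sketch, assuming x denotes one method's result (the abstract does not state which method is the regressor, so the direction here is an assumption):

```python
# Passing & Bablok regression coefficients reported for 25OH-D3
# across 662 clinical samples.
INTERCEPT, SLOPE = 1.11, 1.06

def convert(x_ug_per_L: float) -> float:
    """Map a 25OH-D3 concentration (µg/L) onto the other platform's scale."""
    return INTERCEPT + SLOPE * x_ug_per_L

for x in (10.0, 20.0, 30.0):  # µg/L
    print(f"{x:5.1f} -> {convert(x):5.2f}")
```

A slope near 1 and a small intercept indicate good agreement between the two platforms, with a modest proportional difference that grows with concentration.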