168 results for Inverse method
Abstract:
We present a novel numerical approach for the comprehensive, flexible, and accurate simulation of poro-elastic wave propagation in cylindrical coordinates. An important application of this method is the modeling of complex seismic wave phenomena in fluid-filled boreholes, which represents a major, and as yet largely unresolved, computational problem in exploration geophysics. In view of this, we consider a numerical mesh consisting of three concentric domains representing the borehole fluid in the center, the borehole casing, and the surrounding porous formation. The spatial discretization is based on a Chebyshev expansion in the radial direction, Fourier expansions in the other directions, and a Runge-Kutta integration scheme for the time evolution. A domain decomposition method based on the method of characteristics is used to match the boundary conditions at the fluid/porous-solid and porous-solid/porous-solid interfaces. The viability and accuracy of the proposed method have been tested and verified in 2D polar coordinates through comparisons with analytical solutions as well as with the results obtained with a corresponding, previously published, and independently benchmarked solution for 2D Cartesian coordinates. The proposed numerical solution also satisfies the reciprocity theorem, which indicates that the inherent singularity associated with the origin of the polar coordinate system is handled adequately.
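The radial discretization described above rests on Chebyshev collocation. A minimal sketch of the standard Chebyshev differentiation matrix on Gauss-Lobatto nodes (the classical construction, e.g. Trefethen's) is shown below for illustration; it is not the authors' poroelastic solver, only the building block such a scheme typically uses.

```python
import numpy as np

def cheb(n):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1]."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)          # Gauss-Lobatto points
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal = negative row sums
    return D, x

# Differentiate f(x) = exp(x) spectrally; the error decreases spectrally with n.
D, x = cheb(16)
print(f"max error: {np.max(np.abs(D @ np.exp(x) - np.exp(x))):.2e}")
```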
Abstract:
The purpose of this study was to evaluate a new method of measuring rolling resistance in treadmill cycling and to establish its sensitivity and reproducibility. One participant was asked to keep a bicycle, held in place at the front by a dynamometer, in equilibrium on a treadmill moving at a constant speed of 5.56 m·s⁻¹ without pedalling. For each condition, the method consisted of 11 measurements of the force required to hold the cycle at different treadmill slopes (0-10%, in 1% increments). The coefficient of rolling resistance was calculated from the forces applied to the bicycle in equilibrium. To test the sensitivity of the method, the bicycle was successively equipped with three tyre types (700 x 28, 700 x 23, 700 x 22) and inflation pressure was set at 150, 300, 600, 900, and 1100 kPa. To test the reproducibility of the method, a second experimenter repeated all measurements done with the 700 x 23 tyres. The method was sensitive enough to detect an effect of both tyre type and inflation pressure (P < 0.001; two-way ANOVA). The measurement of the coefficient of rolling resistance by two separate experimenters resulted in a small bias of 0.00029 (95% CI, -0.00011 to 0.00068). In conclusion, the new method is sensitive and reliable, as well as being simple and affordable.
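The abstract does not give the exact force-balance equations, so the sketch below assumes a simple equilibrium model in which the dynamometer force at slope angle theta is F = m·g·(sin θ + Crr·cos θ), and recovers Crr by least squares over the 11 slope settings. The mass and noise level are hypothetical.

```python
import numpy as np

g = 9.81
m = 85.0                                   # rider + bicycle mass [kg], assumed
slopes = np.arange(0, 11) / 100.0          # treadmill gradients 0% .. 10%
theta = np.arctan(slopes)

crr_true = 0.004                           # value to recover in this synthetic test
F = m * g * (np.sin(theta) + crr_true * np.cos(theta))
F += np.random.default_rng(0).normal(0.0, 0.1, F.size)   # measurement noise [N]

# Linear model: F/(m*g*cos(theta)) = tan(theta) + Crr  ->  Crr is the intercept.
y = F / (m * g * np.cos(theta))
A = np.vstack([np.tan(theta), np.ones_like(theta)]).T
slope_hat, crr_hat = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"estimated Crr = {crr_hat:.5f}")    # ~0.004
```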
A filtering method to correct time-lapse 3D ERT data and improve imaging of natural aquifer dynamics
Abstract:
We have developed a processing methodology that allows crosshole ERT (electrical resistivity tomography) monitoring data to be used to derive temporal fluctuations of groundwater electrical resistivity and thereby characterize the dynamics of groundwater in a gravel aquifer as it is infiltrated by river water. Temporal variations of the raw ERT apparent-resistivity data were mainly sensitive to the resistivity (salinity), temperature, and height of the groundwater, with the relative contributions of these effects depending on the time and the electrode configuration. To resolve the changes in groundwater resistivity, we first expressed fluctuations of temperature-detrended apparent-resistivity data as linear superpositions of (i) time series of river-water-resistivity variations convolved with suitable filter functions and (ii) linear and quadratic representations of river-water-height variations multiplied by appropriate sensitivity factors; river-water height was determined to be a reliable proxy for groundwater height. Individual filter functions and sensitivity factors were obtained for each electrode configuration via deconvolution using a one-month calibration period, and the predicted contributions related to changes in water height were then removed prior to inversion of the temperature-detrended apparent-resistivity data. Application of the filter functions and sensitivity factors accurately predicted the apparent-resistivity variations (the correlation coefficient was 0.98). Furthermore, the filtered ERT monitoring data and the resultant time-lapse resistivity models correlated closely with independently measured groundwater electrical resistivity monitoring data and only weakly with the groundwater-height fluctuations. The inversion results based on the filtered ERT data also showed significantly fewer inversion artefacts than inversions of the raw data. We observed resistivity increases of up to 10%, and the arrival-time peaks in the time-lapse resistivity models matched those in the groundwater resistivity monitoring data.
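A minimal sketch of the correction model described above: the detrended apparent-resistivity fluctuations d(t) are modelled as a filter g convolved with river-water resistivity r(t) plus linear and quadratic terms in river-water height h(t), with g and the sensitivities fitted by least squares over a calibration window, one set per electrode configuration. Function names, the lag count, and the data layout are hypothetical.

```python
import numpy as np

def fit_filter(d, r, h, n_lags):
    """Fit filter g and height sensitivities (a1, a2) over a calibration period."""
    rows = []
    for t in range(n_lags, len(d)):
        lagged_r = r[t - n_lags:t + 1][::-1]          # r(t), r(t-1), ..., r(t-n_lags)
        rows.append(np.concatenate([lagged_r, [h[t], h[t] ** 2]]))
    A = np.asarray(rows)
    coef, *_ = np.linalg.lstsq(A, d[n_lags:], rcond=None)
    g, (a1, a2) = coef[:-2], coef[-2:]
    return g, a1, a2

def remove_height_effect(d, h, a1, a2):
    """Subtract the predicted water-height contribution before inversion."""
    return d - (a1 * h + a2 * h ** 2)
```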
Abstract:
Eukaryotic transcription is tightly regulated by transcriptional regulatory elements, even though these elements may be located far away from their target genes. It is now widely recognized that these regulatory elements can be brought into close proximity through the formation of chromatin loops, and that these loops are crucial for transcriptional regulation of their target genes. The chromosome conformation capture (3C) technique presents a snapshot of long-range interactions by fixing physically interacting elements with formaldehyde, digesting the DNA, and ligating it to obtain a library of unique ligation products. Recently, several large-scale modifications to the 3C technique have been presented. Here, we describe chromosome conformation capture sequencing (4C-seq), a high-throughput version of the 3C technique that combines the 3C-on-chip (4C) protocol with next-generation Illumina sequencing. The method is presented for use in mammalian cell lines, but can be adapted for use in mammalian tissues and any other eukaryotic genome.
Abstract:
Achieving a high degree of dependability in complex macro-systems is challenging. Because of the large number of components and the numerous independent teams involved, an overview of global system performance is usually lacking, which hampers both design and operation. A functional failure mode, effects and criticality analysis (FMECA) approach is proposed to address the dependability optimisation of large and complex systems. The basic inductive FMECA model has been enriched to include considerations such as operational procedures, alarm systems, environmental and human factors, as well as operation in degraded mode. Its implementation in a commercial software tool allows active linking between the functional layers of the system and facilitates data processing and retrieval, enabling it to contribute actively to system optimisation. The proposed methodology has been applied to optimise dependability in a railway signalling system. Signalling systems are a typical example of large complex systems made of multiple hierarchical layers. The proposed approach appears appropriate for assessing the global risk and availability level of the system as well as for identifying its vulnerabilities. This enriched FMECA approach overcomes some of the limitations and pitfalls previously reported for classical FMECA approaches.
Abstract:
Explicitly correlated coupled-cluster calculations of intermolecular interaction energies for the S22 benchmark set of Jurecka, Sponer, Cerny, and Hobza (Phys. Chem. Chem. Phys. 2006, 8, 1985) are presented. Results obtained with the recently proposed CCSD(T)-F12a method and augmented double-zeta basis sets are found to be in very close agreement with basis-set-extrapolated conventional CCSD(T) results. Furthermore, we propose a dispersion-weighted MP2 (DW-MP2) approximation that combines the good accuracy of MP2 for complexes with predominantly electrostatic bonding and of SCS-MP2 for dispersion-dominated ones. The MP2-F12 and SCS-MP2-F12 correlation energies are weighted by a switching function that depends on the relative HF and correlation contributions to the interaction energy. For the S22 set, this yields a mean absolute deviation of 0.2 kcal/mol from the CCSD(T)-F12a results. The method, which allows accurate results to be obtained at low cost, is also tested for a number of dimers that are not in the training set.
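The abstract specifies that the switching function depends on the relative HF and correlation contributions but not its exact form, so the sketch below assumes a logistic switch purely for illustration; the parameters alpha and x0 are hypothetical, not the published DW-MP2 parametrization.

```python
import numpy as np

def dw_mp2(e_hf, e_mp2_corr, e_scsmp2_corr, alpha=3.0, x0=1.0):
    """Dispersion-weighted MP2 sketch: blend MP2-F12 and SCS-MP2-F12 correlation.

    x >> 1: HF-dominated (electrostatic) complex -> MP2-F12 weighted strongly;
    x << 1: dispersion-dominated complex -> SCS-MP2-F12 weighted strongly.
    """
    x = abs(e_hf) / abs(e_mp2_corr)
    w = 1.0 / (1.0 + np.exp(-alpha * (x - x0)))   # assumed logistic switch
    return e_hf + w * e_mp2_corr + (1.0 - w) * e_scsmp2_corr
```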
Abstract:
Gene correction at the site of the mutation in the chromosome is the definitive way to truly cure a genetic disease. The oligonucleotide (ODN)-mediated gene repair technology uses an ODN perfectly complementary to the genomic sequence except for a mismatch at the mutated base. The endogenous repair machinery of the targeted cell then mediates substitution of the desired base in the gene, resulting in a completely normal sequence. In principle, this avoids the potential gene silencing or random integration associated with common viral gene augmentation approaches and leaves regulation of expression of the therapeutic protein intact. The eye is a particularly attractive target for gene repair because of its unique features (a small, easily accessible organ with low diffusion into the systemic circulation). Moreover, therapeutic effects on visual impairment could be obtained with modest levels of repair. This chapter describes in detail the optimized method to target active ODNs to the nuclei of photoreceptors in neonatal mice using (1) an electric current applied at the eye surface (saline transpalpebral iontophoresis), (2) combined with an intravitreous injection of ODNs, as well as the experimental methods for (3) the dissection of adult neural retinas, (4) their immuno-labelling, and (5) flat-mounting for direct observation of photoreceptor survival, a relevant criterion of treatment outcome for retinal degeneration.
Abstract:
Recent findings suggest an association between exposure to cleaning products and respiratory dysfunctions, including asthma. However, little information is available about the quantitative airborne exposures of professional cleaners to volatile organic compounds deriving from cleaning products. During the first phases of the study, a systematic review of cleaning products was performed. Safety data sheets were reviewed to identify the most frequently added volatile organic compounds. Professional cleaning products were found to be complex mixtures of different components (3.5 ± 2.8 compounds per product), and more than 130 chemical substances listed in the safety data sheets were identified in 105 products. The main groups of chemicals were fragrances, glycol ethers, surfactants, and solvents; and, to a lesser extent, phosphates, salts, detergents, pH-stabilizers, acids, and bases. Up to 75% of products contained substances labelled irritant (Xi), 64% harmful (Xn), and 28% corrosive (C). Hazards for eyes (59%), skin (50%), and by ingestion (60%) were the most frequently reported. Monoethanolamine, a strong irritant known to be involved in sensitizing mechanisms as well as allergic reactions, is frequently added to cleaning products. Determination of monoethanolamine in air has traditionally been difficult, and the available air sampling and analysis methods were poorly suited to personal occupational air concentration assessments. A convenient method was therefore developed, with air sampling on impregnated glass-fibre filters followed by one-step desorption, gas chromatography, and nitrogen-phosphorus selective detection. An exposure assessment was then conducted in the cleaning sector to determine airborne concentrations of monoethanolamine, glycol ethers, and benzyl alcohol during different cleaning tasks performed by professional cleaning workers in different companies, and to determine background air concentrations of formaldehyde, a known indoor air contaminant. The occupational exposure study was carried out in 12 cleaning companies, and personal air samples were collected for monoethanolamine (n=68), glycol ethers (n=79), benzyl alcohol (n=15), and formaldehyde (n=45). All measured air concentrations were far below (<1/10) the Swiss eight-hour occupational exposure limits, except those of ethylene glycol mono-n-butyl ether; butoxypropanol and benzyl alcohol could not be judged against limits because none were available. Although detected only once, ethylene glycol mono-n-butyl ether air concentrations (n=4) were high (49.5 mg/m³ to 58.7 mg/m³), hovering at the Swiss occupational exposure limit (49 mg/m³). Background air concentrations showed no presence of monoethanolamine, whereas glycol ethers were often present and formaldehyde was universally detected. Exposures were influenced by the amount of monoethanolamine in the cleaning product, by cross-ventilation, and by spraying. During the last phases of the study, the collected data were used to test an existing exposure-modelling tool. The exposure estimates of this so-called Bayesian tool converged towards the measured exposure range as more measured air concentrations were added, a convergence best described by an inverse second-order equation. The results suggest that the Bayesian tool is not suited to predicting low exposures; it should also be tested with other datasets describing higher exposures.
Low exposures to different chemical sensitizers and irritants should be further investigated to better understand the development of respiratory disorders in cleaning workers. Prevention measures should focus especially on incorrect use of cleaning products, to avoid high air concentrations near the exposure limits.
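The "inverse second-order equation" describing the tool's convergence is not given explicitly in the abstract; one plausible reading is a curve of the form y = a + b/n + c/n², fitted here by least squares on synthetic data purely for illustration.

```python
import numpy as np

n = np.arange(1, 21, dtype=float)            # number of added measurements
y = 0.5 + 2.0 / n + 3.0 / n**2               # synthetic convergence data

# Fit y = a + b/n + c/n^2 (an "inverse second-order" model) via least squares.
A = np.vstack([np.ones_like(n), 1.0 / n, 1.0 / n**2]).T
a, b, c = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"a={a:.2f}, b={b:.2f}, c={c:.2f}")    # recovers 0.5, 2.0, 3.0
```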
Abstract:
Over the last 10 years, diffusion-weighted imaging (DWI) has become an important tool to investigate white matter (WM) anomalies in schizophrenia. Despite technological improvements and the exponential use of this technique, discrepancies remain and little is known about the optimal parameters to apply for diffusion weighting during image acquisition. Specifically, high b-value diffusion-weighted imaging, known to be more sensitive to slow diffusion, is not widely used, even though subtle myelin alterations such as those thought to occur in schizophrenia are likely to affect slow-diffusing protons. Schizophrenia patients and healthy controls were scanned with a high b-value (4000 s/mm²) protocol. Apparent diffusion coefficient (ADC) measures turned out to be very sensitive in detecting differences between schizophrenia patients and healthy volunteers, even in a relatively small sample. We speculate that this is related to the sensitivity of high b-value imaging to the slow-diffusing compartment, believed to reflect mainly the intra-axonal and myelin-bound water pool. We also compared these results to a low b-value imaging experiment performed on the same population in the same scanning session. Even though the acquisition protocols are not strictly comparable, we noticed important differences in sensitivity in favor of high b-value imaging, warranting further exploration.
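For reference, the ADC mentioned above follows from the standard mono-exponential decay model S(b) = S0·exp(-b·ADC); a minimal sketch with synthetic signal values at b = 0 and the high b-value of 4000 s/mm²:

```python
import numpy as np

b = 4000.0                    # diffusion weighting [s/mm^2]
S0, Sb = 1.0, 0.18            # hypothetical signal without and with weighting
adc = np.log(S0 / Sb) / b     # mono-exponential ADC estimate [mm^2/s]
print(f"ADC = {adc:.2e} mm^2/s")
```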
Abstract:
Purpose: SIOPEN scoring of 123I-mIBG imaging has been shown to predict response to induction chemotherapy and outcome at diagnosis in children with HRN. Method: Patterns of skeletal 123I-mIBG uptake were assigned numerical scores (Mscore) ranging from 0 (no metastasis) to 72 (diffuse metastases) within 12 body areas as described previously. 271 anonymised, paired image data sets acquired at diagnosis and on completion of Rapid COJEC induction chemotherapy were reviewed, constituting a representative sample of the 1602 children treated prospectively within the HR-NBL1/SIOPEN trial. Pre- and post-treatment Mscores were compared with bone marrow cytology (BM) and 3-year event-free survival (EFS). Results: 224/271 patients showed skeletal mIBG uptake at diagnosis and were evaluable for mIBG response. Complete response (CR) on mIBG to Rapid COJEC induction was achieved by 66%, 34% and 15% of patients who had pre-treatment Mscores of <18 (n=65, 29%), 18-44 (n=95, 42%) and ≥45 (n=64, 28.5%), respectively (chi-squared test, p<0.0001). Mscore at diagnosis and on completion of Rapid COJEC correlated strongly with BM involvement (p<0.0001). The correlation of pre-treatment scores with post-treatment scores and response was highly significant (p<0.001). Most importantly, the 3-year EFS in 47 children with an Mscore of 0 at diagnosis was 0.68 (±0.07), by comparison with 0.42 (±0.06), 0.35 (±0.05) and 0.25 (±0.06) for patients in pre-treatment score groups <18, 18-44 and ≥45, respectively (p<0.001). An Mscore threshold of ≥45 at diagnosis was associated with significantly worse outcome by comparison with all other Mscore groups (p=0.029). The 3-year EFS of 0.53 (±0.07) for patients in metastatic CR (mIBG and BM) after Rapid COJEC (33%) is clearly superior to that of patients not achieving metastatic CR (0.24 (±0.04), p=0.005). Conclusion: SIOPEN scoring of 123I-mIBG imaging has been shown to predict response to induction chemotherapy and outcome at diagnosis in children with HRN.
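The chi-squared test relating pre-treatment Mscore group to complete response can be reconstructed from the rates reported above (66%, 34%, 15% CR in groups of n = 65, 95, 64); a minimal sketch, noting that the rounded counts are approximate:

```python
import numpy as np
from scipy.stats import chi2_contingency

n = np.array([65, 95, 64])                            # Mscore <18, 18-44, >=45
cr = np.round(n * np.array([0.66, 0.34, 0.15])).astype(int)
table = np.vstack([cr, n - cr])                       # rows: CR / no CR
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}") # p << 0.0001, as reported
```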
Abstract:
With the trend in molecular epidemiology towards both genome-wide association studies and complex modelling, the need for large sample sizes to detect small effects and to allow for the estimation of many parameters within a model continues to increase. Unfortunately, most methods of association analysis have been restricted to either a family-based or a case-control design, preventing the synthesis of data from multiple studies. Transmission disequilibrium-type methods for detecting linkage disequilibrium from family data were developed as an effective way of preventing the detection of association due to population stratification. Because these methods condition on parental genotype, however, they have precluded the joint analysis of family and case-control data, while methods for case-control data may not protect against population stratification and do not allow for familial correlations. We present here an extension of a family-based association analysis method for continuous traits that will simultaneously test for, and if necessary control for, population stratification. We further extend this method to analyse binary traits (and therefore family and case-control data together) and to estimate genetic effects in the population accurately, even when using an ascertained family sample. Finally, we present the power of this binary extension for both family-only and joint family and case-control data, and demonstrate the accuracy of the association parameter and variance components in an ascertained family sample.
Abstract:
In clinical practice, a classification of seizures based on clinical signs and symptoms leads to an improved understanding of epilepsy-related issues and therefore contributes strongly to better patient care. The inverse problem involves inferring the anatomical brain localization of a seizure from the scalp-surface EEG, a concept we apply here to correlate seizure origin with seizure semiology. The spheres of sensorium, motor features, consciousness changes, and autonomic alterations during ictal and postictal manifestations are reviewed, including several subdivisions used to better categorize particular features. Particular attention is given to behavioral features, as well as to features occurring in idiopathic generalized epileptic syndromes and psychogenic nonepileptic spells.
Abstract:
BACKGROUND: Postmenopausal women with hormone receptor-positive early breast cancer have a persistent, long-term risk of breast-cancer recurrence and death. Therefore, trials assessing endocrine therapies for this patient population need extended follow-up. We present an update of efficacy outcomes in the Breast International Group (BIG) 1-98 study at 8·1 years median follow-up. METHODS: BIG 1-98 is a randomised, phase 3, double-blind trial of postmenopausal women with hormone receptor-positive early breast cancer that compares 5 years of tamoxifen or letrozole monotherapy, or sequential treatment with 2 years of one of these drugs followed by 3 years of the other. Randomisation was done with permuted blocks, and stratified according to the two-arm or four-arm randomisation option, participating institution, and chemotherapy use. Patients, investigators, data managers, and medical reviewers were masked. The primary efficacy endpoint was disease-free survival (events were invasive breast cancer relapse, second primaries [contralateral breast and non-breast], or death without a previous cancer event). Secondary endpoints were overall survival, distant recurrence-free interval (DRFI), and breast cancer-free interval (BCFI). The monotherapy comparison included patients randomly assigned to tamoxifen or letrozole for 5 years. In 2005, after a significant disease-free survival benefit was reported for letrozole as compared with tamoxifen, a protocol amendment facilitated the crossover to letrozole of patients who were still receiving tamoxifen alone; Cox models and Kaplan-Meier estimates with inverse probability of censoring weighting (IPCW) are used to account for the selective crossover to letrozole of patients (n=619) in the tamoxifen arm. The comparison of sequential treatments with letrozole monotherapy included patients enrolled and randomly assigned to letrozole for 5 years, letrozole for 2 years followed by tamoxifen for 3 years, or tamoxifen for 2 years followed by letrozole for 3 years. Treatment has ended for all patients, and detailed safety results for adverse events that occurred during the 5 years of treatment have been reported elsewhere. Follow-up is continuing for those enrolled in the four-arm option. BIG 1-98 is registered with ClinicalTrials.gov, number NCT00004205. FINDINGS: 8010 patients were included in the trial, with a median follow-up of 8·1 years (range 0-12·4). 2459 were randomly assigned to monotherapy with tamoxifen for 5 years and 2463 to monotherapy with letrozole for 5 years. In the four-arm option of the trial, 1546 were randomly assigned to letrozole for 5 years, 1548 to tamoxifen for 5 years, 1540 to letrozole for 2 years followed by tamoxifen for 3 years, and 1548 to tamoxifen for 2 years followed by letrozole for 3 years. At a median follow-up of 8·7 years from randomisation (range 0-12·4), letrozole monotherapy was significantly better than tamoxifen, whether by IPCW or intention-to-treat analysis (IPCW disease-free survival HR 0·82 [95% CI 0·74-0·92], overall survival HR 0·79 [0·69-0·90], DRFI HR 0·79 [0·68-0·92], BCFI HR 0·80 [0·70-0·92]; intention-to-treat disease-free survival HR 0·86 [0·78-0·96], overall survival HR 0·87 [0·77-0·999], DRFI HR 0·86 [0·74-0·998], BCFI HR 0·86 [0·76-0·98]). At a median follow-up of 8·0 years from randomisation (range 0-11·2) for the comparison of the sequential groups with letrozole monotherapy, there were no statistically significant differences in any of the four endpoints for either sequence.
8-year intention-to-treat estimates (each with SE ≤1·1%) for letrozole monotherapy, letrozole followed by tamoxifen, and tamoxifen followed by letrozole were 78·6%, 77·8%, and 77·3% for disease-free survival; 87·5%, 87·7%, and 85·9% for overall survival; 89·9%, 88·7%, and 88·1% for DRFI; and 86·1%, 85·3%, and 84·3% for BCFI. INTERPRETATION: For postmenopausal women with endocrine-responsive early breast cancer, letrozole monotherapy reduces breast cancer recurrence and mortality compared with tamoxifen monotherapy. Sequential treatments involving tamoxifen and letrozole do not improve outcome compared with letrozole monotherapy, but might be useful strategies when considering an individual patient's risk of recurrence and treatment tolerability. FUNDING: Novartis, United States National Cancer Institute, International Breast Cancer Study Group.
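The IPCW adjustment used above can be sketched generically: selective crossover is treated as a censoring event, the probability K(t) of remaining uncrossed is estimated with a product-limit (Kaplan-Meier-type) estimator, and each patient still on the randomised arm receives weight 1/K(t). The sketch below uses synthetic data and is an illustration of the general technique, not the trial's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
t_cross = rng.exponential(10.0, n)        # time to crossover [years], synthetic
t_obs = np.minimum(t_cross, 8.0)          # follow-up truncated at 8 years
crossed = t_cross < 8.0                   # crossover observed before truncation

# Product-limit estimate of K(t) = Pr(not yet crossed over by t).
order = np.argsort(t_obs)
events = crossed[order].astype(float)
at_risk = np.arange(n, 0, -1)
K = np.cumprod(1.0 - events / at_risk)    # K at each ordered observation time

# IPCW weight for each patient at their own follow-up time.
weights = 1.0 / K[np.argsort(order)]      # argsort(order) inverts the sort
print(weights[:5])
```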