29 results for Invariant, partially invariant and conditionally invariant solutions

at Université de Lausanne, Switzerland


Relevance:

100.00%

Publisher:

Abstract:

U-Pb dating of zircons by laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) is a widely used analytical technique in Earth Sciences. For U-Pb ages below 1 billion years (1 Ga), Pb-206/U-238 dates are usually used, as they show the least bias from external parameters such as the presence of initial lead and its isotopic composition in the analysed mineral. Precision and accuracy of the Pb/U ratio are thus of highest importance in LA-ICPMS geochronology. We first evaluate the statistical distribution of the sweep intensities using goodness-of-fit tests, in order to identify a model probability distribution that fits the data and allows an appropriate formulation of the standard deviation. We then discuss three main methods to calculate the Pb/U intensity ratio and its uncertainty in LA-ICPMS: (1) the ratio-of-the-mean-intensities method, (2) the mean-of-the-intensity-ratios method and (3) the intercept method. These methods apply different functions to the same raw intensity vs. time data to calculate the mean Pb/U intensity ratio. Thus, the calculated intensity ratio and its uncertainty depend on the method applied. We demonstrate that the accuracy and, conditionally, the precision of the ratio-of-the-mean-intensities method are invariant to the intensity fluctuations and averaging related to the dwell-time selection and off-line data transformation (averaging of several sweeps); we present a statistical approach for calculating the uncertainty of this method for transient signals. We also show that the accuracy of methods (2) and (3) is influenced by the intensity fluctuations and averaging, and that the extent of this influence can amount to tens of percentage points; we show that the uncertainty of these methods also depends on how the signal is averaged. Each of the above methods imposes requirements on the instrumentation. The ratio-of-the-mean-intensities method is sufficiently accurate provided the laser-induced fractionation between the beginning and the end of the signal is kept low and linear. We show, based on a comprehensive series of analyses with different ablation pit sizes, energy densities and repetition rates on a 193 nm ns-ablation system, that such fractionation behaviour requires a low ablation speed (low energy density and low repetition rate). Overall, we conclude that the ratio-of-the-mean-intensities method combined with low sampling rates is the most mathematically accurate among the existing data treatment methods for U-Pb zircon dating by sensitive sector-field ICPMS.
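The practical difference between the three data-treatment methods can be illustrated on synthetic sweep data. The sketch below is a minimal illustration, assuming made-up, noise-perturbed Pb-206 and U-238 intensities with a true ratio of 0.1; the noise model, sweep count and pooling factor are illustrative assumptions, not values from this study.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical synthetic signal: 400 sweeps of background-corrected
# Pb-206 and U-238 intensities (counts per second) with a true ratio of 0.1.
n_sweeps = 400
u238 = rng.normal(loc=1.0e6, scale=3.0e4, size=n_sweeps)
pb206 = 0.1 * u238 + rng.normal(loc=0.0, scale=3.0e3, size=n_sweeps)

# (1) Ratio-of-the-mean-intensities method
ratio_of_means = pb206.mean() / u238.mean()

# (2) Mean-of-the-intensity-ratios method
sweep_ratios = pb206 / u238
mean_of_ratios = sweep_ratios.mean()

# (3) Intercept-style method: regress the sweep ratios against time and
# take the value of the fitted line at t = 0 (start of ablation).
t = np.arange(n_sweeps, dtype=float)
slope, intercept = np.polyfit(t, sweep_ratios, 1)

print(f"ratio of means  : {ratio_of_means:.5f}")
print(f"mean of ratios  : {mean_of_ratios:.5f}")
print(f"intercept method: {intercept:.5f}")

# Effect of off-line averaging (e.g. pooling 10 sweeps before taking ratios):
# method (1) is unchanged, whereas methods (2) and (3) can shift.
pooled = lambda x, k=10: x.reshape(-1, k).mean(axis=1)
print("mean of ratios, 10-sweep averages:",
      (pooled(pb206) / pooled(u238)).mean())
```

Because method (1) uses only the two signal means, pooling several sweeps before taking ratios leaves its result unchanged, whereas the results of methods (2) and (3) depend on how the signal is averaged.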

Relevance:

100.00%

Publisher:

Abstract:

An EGFP construct interacting with the surface of PIB1000-PEG6000-PIB1000 vesicles showed a ~2-fold enhancement of fluorescence emission. Because of the construct's design, with the amphiphilic peptide inserted into the PIB core, EGFP is expected to experience a "pure" PEG environment. To unravel this phenomenon, PEG/water solutions of different molecular weights and concentrations were used. Already at a ~1:10 protein/PEG molar ratio, an increase in fluorescence emission is observed, reaching a plateau that correlates with the PEG molecular weight. Parallel experiments in the presence of aqueous glycerol solutions showed a slight fluorescence enhancement, but only starting at much higher concentrations. Molecular dynamics simulations of EGFP in neat water, glycerol, and PEG aqueous solutions were performed, showing that PEG molecules tend to "wrap" the protein, creating a microenvironment in which the local PEG concentration is higher than its bulk concentration. Because the fluorescence emission can be perturbed by the refractive index of the medium surrounding the protein, the clustering of PEG molecules induces enhanced fluorescence emission already at extremely low bulk concentrations. These findings can be important for the use of EGFP as a reporter in molecular biology experiments.

Relevance:

100.00%

Publisher:

Abstract:

Retroviral transfer of T cell antigen receptor (TCR) genes selected by circumventing tolerance to broad tumor- and leukemia-associated antigens in human leukocyte antigen (HLA)-A*0201 (A2.1) transgenic (Tg) mice allows the therapeutic reprogramming of human T lymphocytes. Using a TCR derived from human CD8 x A2.1/Kb mice, specific for natural peptide-A2.1 (pA2.1) complexes comprising residues 81-88 of the human homolog of the murine double-minute 2 oncoprotein, MDM2(81-88), we found that the heterodimeric CD8 alpha beta coreceptor, but not the normally expressed homodimeric CD8 alpha alpha, is required for tetramer binding and functional redirection of TCR-transduced human T cells. CD8+ T cells that received a humanized derivative of the MDM2 TCR bound pA2.1 tetramers only in the presence of an anti-human CD8 antibody and required more peptide than wild-type (WT) MDM2 TCR+ T cells to mount equivalent cytotoxicity. They were, however, sufficiently effective in recognizing malignant targets, including fresh leukemia cells. The most efficient expression of transduced TCR in human T lymphocytes was obtained with mouse rather than human constant (C) alpha beta domains, as demonstrated with partially humanized and murinized TCRs of primary mouse and human origin, respectively. We further observed a reciprocal relationship between the expression level of the Tg WT mouse TCR and that of the natural human TCR, resulting in T cells with decreased levels of normal human cell-surface TCR. In contrast, natural human TCR display remained unaffected after delivery of the humanized MDM2 TCR. These results provide important insights into the molecular basis of TCR gene therapy of malignant disease.

Relevance:

100.00%

Publisher:

Abstract:

This dissertation focuses on the practice of regulatory governance, through the study of the functioning of formally independent regulatory agencies (IRAs), with special attention to their de facto independence. The research goals are grounded in a "neo-positivist" (or "reconstructed positivist") position (Hawkesworth 1992; Radaelli 2000b; Sabatier 2000). This perspective starts from the ontological assumption that, even if subjective perceptions are constitutive elements of political phenomena, a real world exists beyond any social construction and can, however imperfectly, become the object of scientific inquiry. Epistemologically, it follows that hypothetical-deductive theories with explanatory aims can be tested by employing a proper methodology and set of analytical techniques. It is thus possible, to a certain extent, to make scientific inferences and draw general conclusions, according to a Bayesian conception of knowledge in which prior scientific beliefs in the truth of the related hypotheses are updated (Howson 1998), while acknowledging that the conditions of truth are at least partially subjective and historically determined (Foucault 1988; Kuhn 1970). At the same time, a sceptical position is adopted towards the supposed disjunction between facts and values and the possibility of discovering abstract universal laws in social science. It has been observed that the current version of capitalism corresponds to the golden age of regulation, and that since the 1980s no government activity in OECD countries has grown faster than regulatory functions (Jacobs 1999). In an apparent paradox, the ongoing dynamics of liberalisation, privatisation, decartelisation, internationalisation, and regional integration have hardly led to the crumbling of the state; instead, they have promoted a wave of regulatory growth in the face of new risks and new opportunities (Vogel 1996). Accordingly, a new order of regulatory capitalism is rising, implying a new division of labour between state and society and entailing the expansion and intensification of regulation (Levi-Faur 2005). The previous order, relying on public ownership and public intervention and/or on sectoral self-regulation by private actors, is being replaced by a more formalised, expert-based, open, and independently regulated model of governance. Independent regulatory agencies (IRAs), that is, formally independent administrative agencies with regulatory powers that exercise public authority delegated by political decision makers, represent the main institutional feature of regulatory governance (Gilardi 2008). IRAs constitute a relatively new technology of regulation in western Europe, at least for certain domains, but they are increasingly widespread across countries and sectors. For instance, independent regulators have been set up to regulate very diverse issues, such as general competition, banking and finance, telecommunications, civil aviation, railway services, food safety, the pharmaceutical industry, electricity, environmental protection, and personal data privacy. Two attributes of IRAs deserve special mention. On the one hand, they are formally separated from democratic institutions and elected politicians, which raises normative and empirical concerns about their accountability and legitimacy.
On the other hand, IRAs often accumulate executive, (quasi-)legislative, and adjudicatory functions alongside their regulatory competencies, yet hard questions about their role as political actors and about their performance remain largely unaddressed.

Relevance:

100.00%

Publisher:

Abstract:

When facing age-related cerebral decline, older adults are unequally affected by cognitive impairment, and we do not know why. To explore the underlying mechanisms and find possible solutions to maintain life-space mobility, there is a need for a standardized behavioral test that relates to behaviors in natural environments. The aim of the project described in this paper was therefore to provide a free, reliable, transparent, computer-based instrument capable of detecting age-related changes in visual processing and cortical functions, for the purposes of research into human behavior in computational transportation science. After establishing content validity and exploring the psychometric properties of the developed tasks, we derived the scoring method for measuring cerebral decline and tested the instrument's validity against on-road driving performance in 106 older drivers aged ≥70 years attending a driving refresher course organized by the Swiss Automobile Association (Study 1). We then validated the derived method on a new sample of 182 drivers (Study 2), measured the instrument's reliability by having 17 healthy, young volunteers repeat all tests included in the instrument five times (Study 3), and explored the psychophysical functions underlying the instrument in 47 older drivers (Study 4). Finally, we tested the instrument's responsiveness to alcohol and its effects on driving-simulator performance in a randomized, double-blind, placebo-controlled, crossover, dose-response validation trial including 20 healthy, young volunteers (Study 5). The developed instrument revealed good psychometric properties related to processing speed. It was reliable (ICC = 0.853), showed a reasonable association with on-road driving performance (R² = 0.053), and responded to blood alcohol concentrations of 0.5 g/L (p = 0.008). Our results suggest that MedDrive is capable of detecting age-related changes that affect processing speed. These changes nevertheless do not necessarily affect driving behavior.

Relevance:

100.00%

Publisher:

Abstract:

The amiloride-sensitive epithelial Na channel (ENaC) is a heteromultimeric channel made of three subunits (alpha, beta, gamma). The structures involved in the ion permeation pathway have only been partially identified, and the respective contributions of each subunit to the formation of the conduction pore have not yet been established. Using a site-directed mutagenesis approach, we have identified, in a short segment preceding the second membrane-spanning domain (the pre-M2 segment), amino acid residues involved in ion permeation and critical for channel block by amiloride. Cys substitutions of Gly residues in the beta and gamma subunits at positions betaG525 and gammaG537 increased the apparent inhibitory constant (Ki) for amiloride by >1,000-fold and decreased channel unitary current without affecting ion selectivity. The corresponding S583C mutation in the alpha subunit increased the amiloride Ki by 20-fold without changing channel conducting properties. Coexpression of these mutated alpha, beta and gamma subunits resulted in a non-conducting channel expressed at the cell surface. Finally, these Cys substitutions increased channel affinity for block by external Zn2+ ions, in particular the alphaS583C mutant, which showed a Ki for Zn2+ of 29 microM. The alphaW582L and betaG522D mutations also increased the amiloride Ki, the latter generating a Ca2+ blocking site located 15% within the membrane electric field. These experiments provide strong evidence that the alpha, beta and gamma subunits of ENaC all participate in forming the pore and the ion permeation pathway of the channel. The pre-M2 segments of the alpha, beta and gamma subunits may form a pore-loop structure at the extracellular face of the channel, where amiloride binds within the channel lumen. We propose that amiloride interacts with Na+ ions at an external Na+ binding site, preventing ion permeation through the channel pore.

Relevance:

100.00%

Publisher:

Abstract:

In groundwater applications, Monte Carlo methods are employed to model the uncertainty of geological parameters. However, their brute-force application becomes computationally prohibitive for highly detailed geological descriptions, complex physical processes, and large numbers of realizations. The Distance Kernel Method (DKM) overcomes this issue by clustering the realizations in a multidimensional space based on the flow responses obtained by means of an approximate (computationally cheaper) model; the uncertainty is then estimated from the exact responses, which are computed only for one representative realization per cluster (the medoid). Usually, DKM is employed to decrease the size of the sample of realizations considered for estimating the uncertainty. Here, we propose to also use the information contained in the approximate responses for uncertainty quantification. The subset of exact solutions provided by DKM is employed to construct an error model and correct the potential bias of the approximate model. Two error models are devised that both employ the difference between approximate and exact medoid solutions, but differ in the way medoid errors are interpolated to correct the whole set of realizations. The Local Error Model rests upon the clustering defined by DKM and can be seen as a natural way to account for intra-cluster variability; the Global Error Model employs a linear interpolation of all medoid errors regardless of the cluster to which a given realization belongs. These error models are evaluated for an idealized pollution problem in which the uncertainty of the breakthrough curve needs to be estimated. For this numerical test case, we demonstrate that the error models improve the uncertainty quantification provided by the DKM algorithm and are effective in correcting the bias of the estimate computed solely from the approximate multiscale finite-volume (MsFV) results. The framework presented here is not specific to the methods considered and can be applied to other combinations of approximate models and techniques to select a subset of realizations.
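The two error models can be sketched with a toy example. The snippet below is a minimal, self-contained illustration under strong simplifying assumptions: the "exact" and "approximate" responses are synthetic exponential curves, clustering is a plain k-medoids on Euclidean distances between approximate responses, and the global error model is an inverse-distance interpolation of medoid errors; the actual DKM implementation and the interpolation scheme used in the study may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each realization is summarized by a response curve.
# The "approximate" curves stand in for the cheap proxy model and the
# "exact" curves for the expensive reference model; all values are synthetic.
n_real, n_t = 200, 50
t = np.linspace(0.0, 1.0, n_t)
params = rng.uniform(0.5, 2.0, size=n_real)     # stand-in geological parameter
approx = np.exp(-np.outer(params, t))           # cheap, biased proxy responses
exact = approx * (1.0 + 0.1 * t)                # "exact" responses

def k_medoids(X, k, n_iter=50, seed=0):
    """Tiny k-medoids on Euclidean distances between response curves."""
    r = np.random.default_rng(seed)
    med = r.choice(len(X), size=k, replace=False)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - X[med][None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_med = med.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if members.size:
                intra = np.linalg.norm(
                    X[members][:, None, :] - X[members][None, :, :], axis=2).sum(axis=1)
                new_med[j] = members[intra.argmin()]
        if np.array_equal(new_med, med):
            break
        med = new_med
    return med, labels

k = 10
medoids, labels = k_medoids(approx, k)

# The expensive model is run only at the k medoids.
medoid_err = exact[medoids] - approx[medoids]

# Local error model: each realization inherits the error of its cluster medoid.
corrected_local = approx + medoid_err[labels]

# Global error model: interpolate all medoid errors (here with inverse-distance
# weights in the approximate-response space), regardless of cluster membership.
d_all = np.linalg.norm(approx[:, None, :] - approx[medoids][None, :, :], axis=2)
w = 1.0 / (d_all + 1e-12)
w /= w.sum(axis=1, keepdims=True)
corrected_global = approx + w @ medoid_err

for name, est in [("approximate", approx),
                  ("local error model", corrected_local),
                  ("global error model", corrected_global)]:
    print(f"{name:20s} RMSE vs exact: {np.sqrt(np.mean((est - exact) ** 2)):.4f}")
```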

Relevance:

100.00%

Publisher:

Abstract:

The proportion of the population living in or around cities is larger than ever. Urban sprawl and car dependence have taken over from the pedestrian-friendly compact city. Environmental problems such as air pollution, wasted land and noise, as well as health problems, are the result of this still-continuing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. At the same time, an increasing quantity of socio-economic and environmental data is being acquired. In order to get a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist and are still under development; they can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help in communicating the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance-circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed. A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a last section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
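As an illustration of the self-organising-map analysis mentioned in the abstract, the following sketch trains a small SOM on synthetic "socio-economic" indicators; the grid size, decay schedules and data are illustrative assumptions, not those used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 500 "communes" described by 5 standardized
# socio-economic indicators. Values are synthetic, for illustration only.
n_obs, n_feat = 500, 5
X = rng.normal(size=(n_obs, n_feat))

# A small rectangular SOM grid.
rows, cols = 8, 8
grid = np.array([(i, j) for i in range(rows) for j in range(cols)], dtype=float)
weights = rng.normal(size=(rows * cols, n_feat))

n_epochs = 20
sigma0, lr0 = max(rows, cols) / 2.0, 0.5
n_steps = n_epochs * n_obs

step = 0
for epoch in range(n_epochs):
    for x in X[rng.permutation(n_obs)]:
        frac = step / n_steps
        sigma = sigma0 * (0.05 / sigma0) ** frac             # shrinking neighbourhood
        lr = lr0 * (0.01 / lr0) ** frac                      # decaying learning rate
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))    # best-matching unit
        d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)           # grid distance to the BMU
        h = np.exp(-d2 / (2.0 * sigma ** 2))                 # neighbourhood function
        weights += lr * h[:, None] * (x - weights)
        step += 1

# Map each observation to its best-matching unit (its cell on the map).
bmus = np.argmin(((X[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2), axis=1)
print("observations per map cell (first row of the grid):")
print(np.bincount(bmus, minlength=rows * cols)[:cols])
```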

Relevance:

100.00%

Publisher:

Abstract:

The class of Schoenberg transformations, embedding Euclidean distances into higher dimensional Euclidean spaces, is presented, and derived from theorems on positive definite and conditionally negative definite matrices. Original results on the arc lengths, angles and curvature of the transformations are proposed, and visualized on artificial data sets by classical multidimensional scaling. A distance-based discriminant algorithm and a robust multidimensional centroid estimate illustrate the theory, closely connected to the Gaussian kernels of Machine Learning.
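A minimal numerical illustration of the idea, assuming one classical member of the Schoenberg family, phi(d) = (1 - exp(-lambda d)) / lambda applied elementwise to squared Euclidean distances (the form tied to the Gaussian kernel); the data set and lambda are arbitrary. Double-centering the transformed matrix yields a positive semi-definite Gram matrix, confirming that the transformed distances remain squared Euclidean distances and can be visualised by classical multidimensional scaling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Artificial data set: 100 points in the plane.
X = rng.normal(size=(100, 2))
diff = X[:, None, :] - X[None, :, :]
D = (diff ** 2).sum(axis=2)            # matrix of squared Euclidean distances

# Assumed Schoenberg transformation (Gaussian-kernel-related form),
# applied elementwise to the squared distances.
lam = 0.8
D_phi = (1.0 - np.exp(-lam * D)) / lam

def classical_mds(D2):
    """Embed a squared-distance matrix by double centering (Torgerson MDS)."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J              # Gram matrix of the embedding
    evals, evecs = np.linalg.eigh(B)
    return evals[::-1], evecs[:, ::-1]

evals, evecs = classical_mds(D_phi)
print("smallest eigenvalue of the Gram matrix:", evals.min())   # ~0 up to rounding
print("share of variance on the first 2 axes:",
      evals[:2].sum() / evals[evals > 0].sum())

# Coordinates of the transformed configuration (for plotting or discriminant use).
coords = evecs[:, :2] * np.sqrt(np.maximum(evals[:2], 0.0))
```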

Relevance:

100.00%

Publisher:

Abstract:

There is increasing evidence to suggest that the presence of mesoscopic heterogeneities constitutes an important seismic attenuation mechanism in porous rocks. As a consequence, centimetre-scale perturbations of the rock physical properties should be taken into account for seismic modelling whenever detailed and accurate responses of specific target structures are desired, which is, however, computationally prohibitive. A convenient way to circumvent this problem is to use an upscaling procedure to replace each of the heterogeneous porous media composing the geological model by a corresponding equivalent visco-elastic solid and to solve the visco-elastic equations of motion for the inferred equivalent model. While the overall qualitative validity of this procedure is well established, there are as yet no quantitative analyses of the equivalence of the seismograms resulting from the original poro-elastic and the corresponding upscaled visco-elastic models. To address this issue, we compare poro-elastic and visco-elastic solutions for a range of marine-type models of increasing complexity. We find that, despite the identical dispersion and attenuation behaviour of the heterogeneous poro-elastic and the equivalent visco-elastic media, the seismograms may differ substantially because of differing boundary conditions, for which additional options exist in the poro-elastic case. In particular, we observe that at the fluid/porous-solid interface the poro- and visco-elastic seismograms agree for closed-pore boundary conditions, but differ significantly for open-pore boundary conditions. This is an important result that has potentially far-reaching implications for wave-equation-based algorithms in exploration geophysics involving fluid/porous-solid interfaces, such as, for example, wavefield decomposition.

Relevance:

100.00%

Publisher:

Abstract:

We have constructed a forward modelling code in Matlab, capable of handling several commonly used electrical and electromagnetic methods in a 1D environment. We review the implemented electromagnetic field equations for grounded wires, frequency and transient soundings, and present new solutions for the case of a non-magnetic first layer. The CR1Dmod code evaluates the Hankel transforms occurring in the field equations using either the Fast Hankel Transform, based on digital filter theory, or a numerical integration scheme applied between the zeros of the Bessel function. A graphical user interface allows easy construction of 1D models and control of the parameters. Modelling results are in agreement with those of other authors, but the computation time is longer than that of other available codes. Nevertheless, the CR1Dmod routine handles complex resistivities and offers solutions based on the full EM equations as well as on the quasi-static approximation. Thus, modelling of effects based on changes in the magnetic permeability and the permittivity is also possible.
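The "integration between the zeros of the Bessel function" scheme can be sketched as follows. This is a simplified Python illustration rather than the CR1Dmod implementation (which is written in Matlab): the kernel is a toy exponential with a known closed-form Hankel transform, and no convergence acceleration is applied, which a production code would typically add.

```python
import numpy as np
from scipy import integrate, special

def hankel_transform_j0(kernel, r, n_intervals=60):
    """Evaluate int_0^inf kernel(lam) * J0(lam*r) * lam dlam by integrating
    between consecutive zeros of J0(lam*r) and summing the alternating
    partial integrals (a simple, non-accelerated scheme)."""
    zeros = special.jn_zeros(0, n_intervals) / r       # break points where J0(lam*r) = 0
    breaks = np.concatenate(([0.0], zeros))
    total = 0.0
    for a, b in zip(breaks[:-1], breaks[1:]):
        val, _ = integrate.quad(lambda lam: kernel(lam) * special.j0(lam * r) * lam, a, b)
        total += val
    return total

# Sanity check against a known closed form (Lipschitz integral):
#   int_0^inf exp(-a*lam) * J0(lam*r) * lam dlam = a / (a^2 + r^2)^(3/2)
a, r = 1.0, 2.0
numeric = hankel_transform_j0(lambda lam: np.exp(-a * lam), r)
analytic = a / (a ** 2 + r ** 2) ** 1.5
print(numeric, analytic)
```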

Relevance:

100.00%

Publisher:

Abstract:

The recent trend for journals to require open access to primary data included in publications has been embraced by many biologists, but has caused apprehension amongst researchers engaged in long-term ecological and evolutionary studies. A worldwide survey of 73 principal investigators (PIs) with long-term studies revealed positive attitudes towards sharing data with the agreement or involvement of the PI, and 93% of PIs have historically shared data. Only 8% were in favor of uncontrolled, open access to primary data, while 63% expressed serious concern. We present here their viewpoint on an issue that can have non-trivial scientific consequences. We discuss potential costs of public data archiving and provide possible solutions to meet the needs of journals and researchers.

Relevance:

100.00%

Publisher:

Abstract:

Integrating single nucleotide polymorphism (SNP) p-values from genome-wide association studies (GWAS) across genes and pathways is a strategy to improve statistical power and gain biological insight. Here, we present Pascal (Pathway scoring algorithm), a powerful tool for computing gene and pathway scores from SNP-phenotype association summary statistics. For gene score computation, we implemented analytic and efficient numerical solutions to calculate test statistics. We examined in particular the sum and the maximum of chi-squared statistics, which measure the average and the strongest association signals per gene, respectively. For pathway scoring, we use a modified Fisher method, which not only offers a significant power improvement over more traditional enrichment strategies, but also eliminates the problem of arbitrary threshold selection inherent in any binary-membership-based pathway enrichment approach. We demonstrate the marked increase in power by analyzing summary statistics from dozens of large meta-studies for various traits. Our extensive testing indicates that our method not only excels in rigorous type I error control, but also results in more biologically meaningful discoveries.
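The two gene-score statistics can be sketched as follows, under the strong simplifying assumption of independent SNPs (Pascal itself accounts for linkage disequilibrium between SNPs, which changes the null distributions); the p-values and SNP count below are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical input: per-SNP GWAS p-values for the SNPs assigned to one gene.
snp_pvalues = rng.uniform(size=12)
chi2_stats = stats.chi2.isf(snp_pvalues, df=1)   # convert p-values to 1-df chi-squared

n = len(chi2_stats)

# Sum-of-chi-squares gene score: captures the average association signal.
# Under independence, the sum is chi-squared with n degrees of freedom.
sum_stat = chi2_stats.sum()
p_sum = stats.chi2.sf(sum_stat, df=n)

# Max-of-chi-squares gene score: captures the strongest single-SNP signal.
# Under independence, the Sidak correction gives the gene-level p-value.
p_min = snp_pvalues.min()
p_max = 1.0 - (1.0 - p_min) ** n

print(f"sum score: {sum_stat:.2f}, gene p-value: {p_sum:.3g}")
print(f"max score gene p-value: {p_max:.3g}")
```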

Relevance:

100.00%

Publisher:

Abstract:

When emerging from the ribosomes, new polypeptides need to fold properly, eventually translocate, and then assemble into stable, yet functionally flexible complexes. During their lifetime, native proteins are often exposed to stresses that can partially unfold them and convert them into stably misfolded and aggregated species, which can in turn cause cellular damage and propagate to other cells. In animal cells, especially in aged neurons, toxic aggregates may accumulate, induce cell death and lead to tissue degeneration via different mechanisms, such as apoptosis, as in Parkinson's and Alzheimer's diseases and in aging in general. The main cellular mechanisms effectively controlling protein homeostasis in youth and healthy adulthood are: (1) the molecular chaperones, acting as aggregate-unfolding and refolding enzymes, (2) the chaperone-gated proteases, acting as aggregate-unfolding and degrading enzymes, (3) the aggresomes, acting as aggregate-compacting machineries, and (4) the autophagosomes, acting as aggregate-degrading organelles. For unclear reasons, these cellular defences become gradually incapacitated with age, leading to the onset of degenerative diseases. Understanding these mechanisms and the reasons for their incapacitation in late adulthood is key to the design of new therapies against the progression of aging, degenerative diseases and cancers.

Relevance:

100.00%

Publisher:

Abstract:

Background/Purpose: The trabecular bone score (TBS), a novel gray-level texture index determined from lumbar spine DXA scans, correlates with 3D parameters of trabecular bone microarchitecture known to predict fracture. TBS may enhance the identification of patients at increased risk for vertebral fracture independently of bone mineral density (BMD) (Boutroy JBMR 2010; Hans JBMR 2011). Denosumab treatment for 36 months decreased bone turnover, increased BMD, and reduced new vertebral fractures in postmenopausal women with osteoporosis (Cummings NEJM 2009). We explored the effect of denosumab on TBS over 36 months and evaluated the association between TBS and lumbar spine BMD in women who had DXA scans obtained from scanners eligible for TBS evaluation in FREEDOM. Methods: FREEDOM was a 3-year, randomized, double-blind trial that enrolled postmenopausal women with a lumbar spine or total hip DXA T-score ≤ -2.5, but not ≤ -4.0, at both sites. Women received placebo or 60 mg denosumab every 6 months. A subset of women in FREEDOM participated in a DXA substudy in which lumbar spine DXA scans were obtained at baseline and months 1, 6, 12, 24, and 36. We retrospectively applied, in a blinded-to-treatment manner, a novel software program (TBS iNsight v1.9, Med-Imaps, Pessac, France) to the standard lumbar spine DXA scans obtained in these women to determine their TBS indices at baseline and months 12, 24, and 36. Based on previous studies, a TBS ≥ 1.35 is considered normal microarchitecture, a TBS between 1.35 and 1.20 partially deteriorated microarchitecture, and a TBS ≤ 1.20 degraded microarchitecture. Results: There were 285 women (128 placebo, 157 denosumab) with a TBS value at baseline and at ≥1 post-baseline visit. Their mean age was 73, their mean lumbar spine BMD T-score was -2.79, and their mean lumbar spine TBS was 1.20. In addition to the robust gains in DXA lumbar spine BMD observed with denosumab (9.8% at month 36), there were consistent, progressive, and significant increases in TBS compared with placebo and baseline (Table & Figure). BMD explained a very small fraction of the variance in TBS at baseline (r² = 0.07). In addition, the variance in the TBS change was largely unrelated to the BMD change, whether expressed in absolute or percentage changes, regardless of treatment, throughout the study (all r² ≤ 0.06), indicating that TBS provides distinct information, independently of BMD. Conclusion: In postmenopausal women with osteoporosis, denosumab significantly improved TBS, an index of lumbar spine trabecular microarchitecture, independently of BMD.