927 results for electrochemical noise analysis
Abstract:
The proportion of the population living in or around cities is greater than ever. Urban sprawl and car dependence have supplanted the pedestrian-friendly compact city. Environmental problems such as air pollution, land waste and noise, as well as health problems, are the result of this ongoing process. Urban planners have to find solutions to these complex problems while at the same time ensuring the economic performance of the city and its surroundings. Meanwhile, an increasing quantity of socio-economic and environmental data is being acquired. To gain a better understanding of the processes and phenomena taking place in the complex urban environment, these data should be analysed. Numerous methods for modelling and simulating such a system exist, are still under development, and can be exploited by urban geographers to improve our understanding of the urban metabolism. Modern and innovative visualisation techniques help communicate the results of such models and simulations. This thesis covers several methods for the analysis, modelling, simulation and visualisation of problems related to urban geography. The analysis of high-dimensional socio-economic data using artificial neural network techniques, especially self-organising maps, is shown using two examples at different scales. The problem of spatio-temporal modelling and data representation is treated and some possible solutions are shown. The simulation of urban dynamics, and more specifically of the traffic due to commuting to work, is illustrated using multi-agent micro-simulation techniques. A section on visualisation methods presents cartograms for transforming geographic space into a feature space, and the distance circle map, a centre-based map representation particularly useful for urban agglomerations. Some issues concerning the importance of scale in urban analysis and the clustering of urban phenomena are discussed.
A new approach to defining urban areas at different scales is developed, and the link with percolation theory is established. Fractal statistics, especially the lacunarity measure, and scaling laws are used to characterise urban clusters. In a final section, population evolution is modelled using a model close to the well-established gravity model. The work covers a wide range of methods useful in urban geography. These methods should be developed further and, at the same time, find their way into the daily work and decision processes of urban planners.
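The self-organising-map analysis mentioned in the abstract can be sketched as follows. The grid size, learning schedule and the synthetic two-cluster "indicator" data are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a rectangular SOM; returns a codebook of shape (gx*gy, n_features)."""
    gx, gy = grid
    codebook = rng.normal(size=(gx * gy, data.shape[1]))
    # Grid coordinates of each unit, used by the neighbourhood function.
    coords = np.array([(i, j) for i in range(gx) for j in range(gy)], float)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            t = step / n_steps
            lr = lr0 * (1 - t)                 # linearly decaying learning rate
            sigma = sigma0 * (1 - t) + 0.5     # shrinking neighbourhood radius
            bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))  # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))                  # Gaussian neighbourhood
            codebook += lr * h[:, None] * (x - codebook)
            step += 1
    return codebook

# Two synthetic "municipality" clusters in a 5-dimensional indicator space.
data = np.vstack([rng.normal(0, 0.3, (50, 5)), rng.normal(3, 0.3, (50, 5))])
codebook = train_som(data)
```

After training, nearby units on the grid hold similar codebook vectors, so projecting each record onto its best matching unit yields the kind of low-dimensional clustering view the thesis applies to socio-economic data.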
Abstract:
We introduce a class of exactly solvable models exhibiting an ordering noise-induced phase transition in which order arises as a result of a balance between the relaxing deterministic dynamics and the randomizing character of the fluctuations. A finite-size scaling analysis of the phase transition reveals that it belongs to the universality class of the equilibrium Ising model. All these results are analyzed in the light of the nonequilibrium probability distribution of the system, which can be obtained analytically. Our results could constitute a possible scenario for the inverted phase diagrams observed in so-called lower-critical-solution-temperature transitions.
Abstract:
The Swift-Hohenberg equation is studied in the presence of a multiplicative noise. This stochastic equation could describe a situation in which a noise has been superimposed on the temperature gradient between the two plates of a Rayleigh-Bénard cell. A linear stability analysis and numerical simulations show that, in contrast to the additive-noise case, convective structures appear in a regime in which a deterministic analysis predicts a homogeneous solution.
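A minimal sketch of such a simulation, assuming a 1-D Swift-Hohenberg equation with multiplicative noise on the control parameter and an explicit Euler-Maruyama step; the grid, time step and noise strength are illustrative choices, not values from the paper:

```python
import numpy as np

# dpsi/dt = [eps + xi(t)] psi - (1 + d^2/dx^2)^2 psi - psi^3, periodic in x.
rng = np.random.default_rng(1)
N, dx, dt = 128, 1.0, 0.05
eps, noise = -0.05, 0.4            # eps < 0: deterministically stable (homogeneous)
psi = 0.01 * rng.standard_normal(N)

def lap(f):
    """Periodic second derivative by central differences."""
    return (np.roll(f, 1) - 2 * f + np.roll(f, -1)) / dx ** 2

for _ in range(2000):
    xi = noise * rng.standard_normal() / np.sqrt(dt)   # white-in-time fluctuation
    lin = psi + 2 * lap(psi) + lap(lap(psi))           # (1 + dxx)^2 psi
    psi = psi + dt * ((eps + xi) * psi - lin - psi ** 3)

amplitude = np.sqrt((psi ** 2).mean())
```

Comparing `amplitude` between runs with `noise = 0` and `noise > 0` is one crude way to probe whether the fluctuations sustain structure where the deterministic analysis predicts decay.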
Abstract:
We present numerical evidence and a theoretical analysis of the appearance of noise-induced anticoherence resonance, not predicted in previous analyses of coherence resonance. We find that this phenomenon occurs for very small intensities of the noise acting on an excitable system, and we argue that it is a universal signature of nonmonotonic relaxational behavior near the oscillatory regime. Moreover, we demonstrate that this new phenomenon is fully compatible with the standard situation of coherence resonance appearing at intermediate noise intensities.
Abstract:
A simple model is introduced that exhibits noise-induced front propagation and in which the noise enters multiplicatively. The invasion of the unstable state is studied both theoretically and numerically. Good agreement is obtained for the mean value of the order parameter and the mean front velocity using the analytical predictions of linear marginal stability analysis.
Abstract:
We study the dynamics of reaction-diffusion fronts under the influence of multiplicative noise. An approximate theoretical scheme is introduced to compute the velocity of the front and its diffusive wandering due to the presence of noise. The theoretical approach is based on a multiple scale analysis rather than on a small noise expansion and is confirmed with numerical simulations for a wide range of the noise intensity. We report on the possibility of noise sustained solutions with a continuum of possible velocities, in situations where only a single velocity is allowed without noise.
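A front of this kind can be sketched with a stochastic FKPP-type equation, du/dt = D u_xx + u - u^2 + s·u·xi, invading the unstable state u = 0. The parameters and the velocity estimator (tracking the u = 0.5 level) are illustrative assumptions, not the scheme used in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N, dx, dt, D, s = 400, 0.5, 0.01, 1.0, 0.2
u = np.where(np.arange(N) < 20, 1.0, 0.0)    # stable state on the left

def front_pos(u):
    """Leftmost grid point where u drops below 0.5, in physical units."""
    return np.argmax(u < 0.5) * dx

x0 = front_pos(u)
steps = 4000
for _ in range(steps):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2
    lap[0] = lap[-1] = 0.0                    # crude no-flux ends
    xi = rng.standard_normal(N) / np.sqrt(dt)
    u += dt * (D * lap + u - u ** 2 + s * u * xi)
    u = np.clip(u, 0.0, None)                 # keep the field non-negative

velocity = (front_pos(u) - x0) / (steps * dt)
```

For s = 0 the deterministic pulled-front speed is 2*sqrt(D); averaging `velocity` over many noise realisations shows how the multiplicative noise shifts the mean speed and makes the front position wander.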
Abstract:
The development of a whole-cell sensor for arsenite detection, coupling biological engineering and electrochemical techniques, is presented. This strategy takes advantage of the natural resistance mechanism of Escherichia coli against toxic arsenic species such as arsenite, which consists of the selective intracellular recognition of arsenite and its pumping out of the cell. A whole-cell biosensor can be produced by coupling the intracellular recognition of arsenite to the generation of an electrochemical signal. To this end, E. coli was equipped with a genetic circuit in which synthesis of beta-galactosidase is under the control of the arsenite-derepressable arsR promoter. The E. coli reporter strain was loaded into a microchip containing 16 independent electrochemical cells (i.e. two-electrode cells), which was then employed for the analysis of tap and groundwater samples. The developed arsenic-sensitive electrochemical biochip is easy to use and outperforms state-of-the-art bacterial bioreporter assays, specifically in its simplicity and response time, while keeping a very good limit of detection in tap water of 0.8 ppb. Additionally, a very good linear response was obtained in the concentration ranges tested (0.94 ppb to 3.75 ppb, R² = 0.9975, and 3.75 ppb to 30 ppb, R² = 0.9991), complying with the acceptable arsenic concentration limit defined by the World Health Organization for drinking water (i.e. 10 ppb). The proposed assay therefore provides a very good alternative for the portable quantification of As(III) in water, as corroborated by the analysis of natural groundwater samples from the Swiss mountains, which showed very good agreement with results obtained by atomic absorption spectroscopy.
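The linear-calibration step behind quantification like this can be sketched as follows, using the lower of the two reported concentration ranges. The current readings are hypothetical illustration data, not measurements from the paper:

```python
import numpy as np

conc_low = np.array([0.94, 1.875, 3.75])     # ppb, within the reported low range
current_low = np.array([12.1, 18.0, 29.9])   # hypothetical nA readings

# Least-squares calibration line and coefficient of determination R^2.
slope, intercept = np.polyfit(conc_low, current_low, 1)
pred = slope * conc_low + intercept
ss_res = ((current_low - pred) ** 2).sum()
ss_tot = ((current_low - current_low.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

def to_concentration(i_meas):
    """Invert the calibration line to estimate concentration from a current reading."""
    return (i_meas - intercept) / slope
```

With a fit of this quality, an unknown sample's reading is converted to ppb by `to_concentration` and compared against the WHO 10 ppb limit.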
Abstract:
The stochastic-trajectory-analysis technique is applied to the calculation of the mean first-passage-time statistics for processes driven by external shot noise. Explicit analytical expressions are obtained for free and bound processes.
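A Monte Carlo estimate of the mean first-passage time for a linearly relaxing process kicked by Poisson shot noise can be sketched as follows; the rate, impulse weight and threshold are illustrative assumptions, and this numerical estimate stands in for the paper's analytical expressions:

```python
import numpy as np

# dx/dt = -x + sum_i w * delta(t - t_i), with Poisson arrival times t_i.
rng = np.random.default_rng(3)
rate, w, threshold, dt = 5.0, 0.3, 1.0, 1e-3

def first_passage_time(t_max=200.0):
    """Simulate one trajectory from x = 0 until it first reaches the threshold."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += -x * dt                       # deterministic relaxation
        x += w * rng.poisson(rate * dt)    # shot-noise impulses in this step
        t += dt
        if x >= threshold:
            return t
    return t_max

times = np.array([first_passage_time() for _ in range(500)])
mfpt = times.mean()
```

Since the stationary mean of the drive, w*rate = 1.5, exceeds the threshold, every trajectory crosses quickly and the sample mean converges with modest ensemble sizes.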
Abstract:
A major issue in the application of waveform inversion methods to crosshole georadar data is the accurate estimation of the source wavelet. Here, we explore the viability and robustness of incorporating this step into a time-domain waveform inversion procedure through an iterative deconvolution approach. Our results indicate that, at least in non-dispersive electrical environments, such an approach provides remarkably accurate and robust estimates of the source wavelet even in the presence of strong heterogeneity in both the dielectric permittivity and electrical conductivity. Our results also indicate that the proposed source wavelet estimation approach is relatively insensitive to ambient noise and to the phase characteristics of the starting wavelet. Finally, there appears to be little-to-no trade-off between the wavelet estimation and the tomographic imaging procedures.
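The core deconvolution step of such a wavelet estimate can be sketched in the frequency domain: given a trace d = g * w (earth response convolved with the wavelet) and a current estimate of g, recover w by water-level spectral division. All signals below are synthetic stand-ins; the paper iterates a step like this inside a full time-domain waveform inversion:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
t = np.arange(n) * 1e-3
f0 = 100.0
# Synthetic Ricker-style source wavelet (an assumption for illustration).
tau = t - 0.02
w_true = (1 - 2 * (np.pi * f0 * tau) ** 2) * np.exp(-(np.pi * f0 * tau) ** 2)
g = np.zeros(n); g[[40, 90, 150]] = [1.0, -0.6, 0.3]   # sparse impulse response
d = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(w_true)))
d += 0.01 * rng.standard_normal(n)                      # ambient noise

def estimate_wavelet(d, g, water=1e-2):
    """Water-level deconvolution: W = D * conj(G) / (|G|^2 + water * max|G|^2)."""
    D, G = np.fft.fft(d), np.fft.fft(g)
    denom = np.abs(G) ** 2 + water * np.abs(G).max() ** 2
    return np.real(np.fft.ifft(D * np.conj(G) / denom))

w_est = estimate_wavelet(d, g)
```

The water-level term keeps the division stable at spectral notches of G, which is one reason such estimates stay robust in the presence of ambient noise.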
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that have to be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but was not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing declustering methods were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase comes along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, the data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset.
A bottom-up approach to method complexity was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests of data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Among the classification methods, probabilistic neural networks (PNN) proved better adapted to modeling high-threshold categorization and to automation. Support vector machines (SVM), on the contrary, performed well under balanced category conditions. In general, it was concluded that no particular prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
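As an illustration of the simplest of the spatial estimators mentioned above, K-nearest-neighbour interpolation with inverse-distance weighting can be sketched as follows; the coordinates and values are synthetic illustration data, not the Swiss radon data set:

```python
import numpy as np

rng = np.random.default_rng(5)
pts = rng.uniform(0, 10, size=(200, 2))                  # scattered sample locations
vals = np.sin(pts[:, 0]) + 0.1 * rng.standard_normal(200)  # noisy spatial variable

def knn_predict(query, pts, vals, k=5):
    """Inverse-distance-weighted average of the k nearest samples."""
    d = np.sqrt(((pts - query) ** 2).sum(axis=1))
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)          # guard against a zero distance
    return (w * vals[idx]).sum() / w.sum()

z = knn_predict(np.array([5.0, 5.0]), pts, vals)
```

Varying `k` trades off local detail against smoothing, which is exactly the neighbourhood-definition question the exploratory analysis above is concerned with.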
Abstract:
Gene set enrichment (GSE) analysis is a popular framework for condensing information from gene expression profiles into a pathway or signature summary. The strengths of this approach over single gene analysis include noise and dimension reduction, as well as greater biological interpretability. As molecular profiling experiments move beyond simple case-control studies, robust and flexible GSE methodologies are needed that can model pathway activity within highly heterogeneous data sets. To address this challenge, we introduce Gene Set Variation Analysis (GSVA), a GSE method that estimates variation of pathway activity over a sample population in an unsupervised manner. We demonstrate the robustness of GSVA in a comparison with current state-of-the-art sample-wise enrichment methods. Further, we provide examples of its utility in differential pathway activity and survival analysis. Lastly, we show how GSVA works analogously with data from both microarray and RNA-seq experiments. GSVA provides increased power to detect subtle pathway activity changes over a sample population in comparison to corresponding methods. While GSE methods are generally regarded as end points of a bioinformatic analysis, GSVA constitutes a starting point to build pathway-centric models of biology. Moreover, GSVA contributes to the current need for GSE methods for RNA-seq data. GSVA is an open source software package for R which forms part of the Bioconductor project and can be downloaded at http://www.bioconductor.org.
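The idea of a per-sample gene-set score can be sketched with a toy rank-based statistic: the average rank of the set's genes within each sample, z-scored against approximate null moments. This is a simplified stand-in to illustrate sample-wise enrichment; it is not the GSVA algorithm itself (which uses kernel density estimation and a KS-like statistic):

```python
import numpy as np

rng = np.random.default_rng(6)
n_genes, n_samples = 1000, 10
expr = rng.standard_normal((n_genes, n_samples))
gene_set = np.arange(50)                   # hypothetical pathway members
expr[gene_set, :5] += 2.0                  # set is up-regulated in samples 0-4

ranks = expr.argsort(axis=0).argsort(axis=0)   # per-sample gene ranks, 0..n_genes-1
set_mean = ranks[gene_set].mean(axis=0)
# Approximate null moments of the mean rank of |set| random genes.
mu = (n_genes - 1) / 2.0
sigma = np.sqrt((n_genes ** 2 - 1) / 12.0 / len(gene_set))
scores = (set_mean - mu) / sigma           # one enrichment score per sample
```

Because the score is computed per sample rather than per contrast, it can feed downstream heterogeneous-population analyses (e.g. survival models), which is the setting GSVA targets.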
Abstract:
Assessment of image quality for digital x-ray mammography systems used in European screening programs relies mainly on contrast-detail CDMAM phantom scoring and requires the acquisition and analysis of many images in order to reduce variability in threshold detectability. Part II of this study proposes an alternative method based on the detectability index (d') calculated for a non-prewhitened model observer with an eye filter (NPWE). The detectability index was calculated from the normalized noise power spectrum and image contrast, both measured from an image of a 5 cm poly(methyl methacrylate) phantom containing a 0.2 mm thick aluminium square, and the pre-sampling modulation transfer function. This was performed as a function of air kerma at the detector for 11 different digital mammography systems. These calculated d' values were compared against threshold gold thickness (T) results measured with the CDMAM test object and against derived theoretical relationships. A simple relationship was found between T and d', as a function of detector air kerma; a linear relationship was found between d' and contrast-to-noise ratio. The values of threshold thickness used to specify acceptable performance in the European Guidelines for 0.10 and 0.25 mm diameter discs were equivalent to calculated threshold detectability indices of 1.05 and 6.30, respectively. The NPWE method is a validated alternative to CDMAM scoring for use in the image quality specification, quality control and optimization of digital x-ray systems for screening mammography.
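A discrete sketch of an NPWE-style detectability index, d'^2 = [sum S^2 MTF^2 E^2]^2 / sum S^2 MTF^2 E^4 NNPS, evaluated radially for a disc-shaped task. The MTF, NNPS and eye filter E below are smooth analytic stand-ins, not the measured system data used in the study:

```python
import numpy as np

f = np.linspace(0.01, 5.0, 500)            # spatial frequency, cycles/mm
df = f[1] - f[0]
contrast = 0.1                             # assumed object contrast
diameter = 0.25                            # disc diameter, mm (one guideline size)
S = contrast * np.abs(np.sinc(f * diameter))   # crude disc task spectrum (assumption)
MTF = np.exp(-f / 2.0)                     # stand-in presampling MTF
NNPS = 1e-5 * (1 + 1.0 / (f + 0.1))       # stand-in normalised noise power spectrum
E = f ** 1.3 * np.exp(-0.9 * f)            # Burgess-style eye filter (assumed form)

# Radial integration: integrand * 2*pi*f*df approximates the 2-D integral.
num = (2 * np.pi * (S ** 2 * MTF ** 2 * E ** 2 * f * df).sum()) ** 2
den = 2 * np.pi * (S ** 2 * MTF ** 2 * E ** 4 * NNPS * f * df).sum()
d_prime = np.sqrt(num / den)
```

Feeding measured MTF and NNPS curves (as functions of detector air kerma) into the same two sums is what lets d' be tracked across systems and dose levels, as the study does.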
Abstract:
BACKGROUND: The quantification of total (free+sulfated) metanephrines in urine is recommended to diagnose pheochromocytoma. Urinary metanephrines include metanephrine itself, normetanephrine and methoxytyramine, mainly in the form of sulfate conjugates (60-80%). Their determination requires the hydrolysis of the sulfate ester moiety to allow electrochemical oxidation of the phenolic group. Commercially available urine calibrators and controls contain essentially free, unhydrolysable metanephrines which are not representative of native urines. The lack of appropriate calibrators may lead to uncertainty regarding the completion of the hydrolysis of sulfated metanephrines, resulting in incorrect quantification. METHODS: We used chemically synthesized sulfated metanephrines to establish whether the procedure most frequently recommended for commercial kits (pH 1.0 for 30 min over a boiling water bath) ensures their complete hydrolysis. RESULTS: We found that sulfated metanephrines differ in their optimum pH to obtain complete hydrolysis. Highest yields and minimal variance were established for incubation at pH 0.7-0.9 during 20 min. CONCLUSION: Urinary pH should be carefully controlled to ensure an efficient and reproducible hydrolysis of sulfated metanephrines. Synthetic sulfated metanephrines represent the optimal material for calibrators and proficiency testing to improve inter-laboratory accuracy.
Abstract:
Correspondence concerning the article by R. Giannetti, published ibid., vol. 49, pp. 87-88.