957 results for Statistical approach


Relevance:

60.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)



Background: In the analysis of the effects of cell treatment, such as drug dosing, identifying changes in gene network structure between normal and treated cells is a key task. One possible way to identify the changes is to compare the structures of networks estimated separately from data on normal and treated cells. However, this approach usually fails to estimate accurate gene networks because of the limited length of the time series and measurement noise. Approaches that identify changes in regulation by using the time series data from both conditions in an efficient manner are therefore needed.

Methods: We propose a new statistical approach, based on the state space representation of the vector autoregressive model, that estimates gene networks under two different conditions in order to identify changes in regulation between the conditions. In the mathematical model of our approach, hidden binary variables are newly introduced to indicate the presence of each regulation under each condition. These hidden binary variables enable efficient data usage: data from both conditions are used for regulations that exist in both, while for condition-specific regulations only the corresponding data are applied. The similarity of the networks under the two conditions is also accounted for automatically through the design of the potential function for the hidden binary variables. To estimate the hidden binary variables, we derive a new variational annealing method that searches for the configuration of the binary variables that maximizes the marginal likelihood.

Results: For the performance evaluation, we use time series data from two topologically similar synthetic networks and confirm that the proposed approach estimates both commonly existing regulations and changes in regulation with higher coverage and precision than existing approaches in almost all experimental settings. As a real-data application, the proposed approach is applied to time series data from normal human lung cells and human lung cells treated by stimulating EGF receptors and dosing an anticancer drug called Gefitinib. In the treated lung cells, a cancer-like condition is simulated by the stimulation of EGF receptors, but the effect should be counteracted by the selective inhibition of EGF receptors by Gefitinib. Gene expression profiles nevertheless differ between the conditions, and the genes related to the identified changes are considered possible off-targets of Gefitinib.

Conclusions: On the synthetically generated time series data, the proposed approach identifies changes in regulation more accurately than existing methods. Applying it to the time series data on normal and treated human lung cells yields candidate off-target genes of Gefitinib. According to published clinical information, one of these genes may be related to a factor in interstitial pneumonia, a known side effect of Gefitinib.
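The "estimate separately and compare" baseline that the abstract contrasts with can be sketched in a few lines. The toy below is illustrative only (it is not the authors' state-space model with hidden binary variables): it simulates a VAR(1) gene system under two conditions, estimates the coefficient matrices by ordinary least squares, and flags edges whose coefficients change. Network size, coefficients, and the threshold are all invented.

```python
import numpy as np

def simulate_var(A, T, noise=0.1, seed=0):
    """Simulate x_t = A x_{t-1} + noise for T steps."""
    rng = np.random.default_rng(seed)
    p = A.shape[0]
    X = np.zeros((T, p))
    X[0] = rng.normal(size=p)
    for t in range(1, T):
        X[t] = A @ X[t - 1] + noise * rng.normal(size=p)
    return X

def estimate_var(X):
    """Least-squares estimate of the VAR(1) coefficient matrix."""
    past, future = X[:-1], X[1:]
    # solve future ~= past @ A.T for A
    A_hat, *_ = np.linalg.lstsq(past, future, rcond=None)
    return A_hat.T

# two conditions differing in a single regulation (gene 0 -> gene 2)
A_normal = np.array([[0.5, 0.0, 0.0],
                     [0.4, 0.5, 0.0],
                     [0.0, 0.4, 0.5]])
A_treated = A_normal.copy()
A_treated[2, 0] = 0.4  # regulation present only after treatment

X_n = simulate_var(A_normal, 500, seed=1)
X_t = simulate_var(A_treated, 500, seed=2)
diff = np.abs(estimate_var(X_t) - estimate_var(X_n))
changed = np.argwhere(diff > 0.2)  # crude threshold on coefficient change
print(changed)
```

With long, clean series this naive comparison recovers the changed edge; the paper's point is that with short, noisy series it degrades, which motivates sharing data across conditions for the common regulations.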


The motivation for the work presented in this thesis is to retrieve profile information for the atmospheric trace constituents nitrogen dioxide (NO2) and ozone (O3) in the lower troposphere from remote sensing measurements. The remote sensing technique used, Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS), is a recent technique that represents a significant advance on the well-established DOAS, especially for the study of tropospheric trace constituents. NO2 is an important trace gas in the lower troposphere because it is involved in the production of tropospheric ozone; ozone and nitrogen dioxide are key factors in determining air quality, with consequences, for example, for human health and the growth of vegetation. To understand NO2 and ozone chemistry in more detail, not only the ground-level concentrations but also the vertical distribution must be measured. In fact, the budget of nitrogen oxides and ozone in the atmosphere is determined both by local emissions and by non-local chemical and dynamical processes (i.e. diffusion and transport at various scales) that strongly affect their vertical and temporal distribution: a tool that resolves the vertical profile information is therefore essential. Useful measurement techniques for atmospheric trace species should fulfill at least two main requirements. First, they must be sufficiently sensitive to detect the species under consideration at ambient concentration levels. Second, they must be specific: the result of the measurement of a particular species must be neither positively nor negatively influenced by any other trace species simultaneously present in the probed volume of air. Air monitoring by spectroscopic techniques has proven to be a very useful tool for fulfilling these requirements, along with a number of other important properties.
During the last decades, many such instruments have been developed based on the absorption properties of the constituents in various regions of the electromagnetic spectrum, ranging from the far infrared to the ultraviolet. Among them, Differential Optical Absorption Spectroscopy (DOAS) has played an important role. DOAS is an established remote sensing technique for probing atmospheric trace gases, which identifies and quantifies them by taking advantage of their molecular absorption structures in the near-UV and visible wavelengths of the electromagnetic spectrum (from 0.25 μm to 0.75 μm). Passive DOAS, in particular, can detect the presence of a trace gas in terms of its concentration integrated over the atmospheric path from the sun to the receiver (the so-called slant column density). The receiver can be located at ground level, or on board an aircraft or a satellite platform. Passive DOAS therefore has a flexible measurement configuration that allows multiple applications. The ability to properly interpret passive DOAS measurements of atmospheric constituents depends crucially on how well the optical path of the light collected by the system is understood, because the final product of DOAS is the concentration of a particular species integrated along the path that radiation covers in the atmosphere. This path is not known a priori and can only be evaluated by Radiative Transfer Models (RTMs). These models are used to calculate the so-called vertical column density of a given trace gas, obtained by dividing the measured slant column density by the so-called air mass factor, which quantifies the enhancement of the light path length within the absorber layers. In the case of the standard DOAS set-up, in which radiation is collected along the vertical direction (zenith-sky DOAS), calculations of the air mass factor have been made using "simple" single-scattering radiative transfer models.
This configuration has its highest sensitivity in the stratosphere, particularly during twilight, as a result of the large enhancement in the stratospheric light path at dawn and dusk combined with a relatively short tropospheric path. To increase the sensitivity of the instrument to tropospheric signals, measurements with the telescope pointing at the horizon (off-axis DOAS) have to be performed. In these circumstances, the light path in the lower layers can become very long and requires radiative transfer models that include multiple scattering and the full treatment of atmospheric sphericity and refraction. In this thesis, a recent development of the well-established DOAS technique is described, referred to as Multiple AXis Differential Optical Absorption Spectroscopy (MAX-DOAS). MAX-DOAS consists of the simultaneous use of several off-axis directions near the horizon: with this configuration, not only is the sensitivity to tropospheric trace gases greatly improved, but vertical profile information can also be retrieved by combining the simultaneous off-axis measurements with sophisticated RTM calculations and inversion techniques. In particular, there is a need for an RTM capable of dealing with all the processes intervening along the light path, supporting all the DOAS geometries used, and treating multiple scattering events with the varying phase functions involved. To achieve these multiple goals, a statistical approach based on the Monte Carlo technique should be used. A Monte Carlo RTM generates an ensemble of random photon paths between the light source and the detector, and uses these paths to reconstruct a remote sensing measurement. Within the present study, the Monte Carlo radiative transfer model PROMSAR (PROcessing of Multi-Scattered Atmospheric Radiation) has been developed and used to correctly interpret the slant column densities obtained from MAX-DOAS measurements.
In order to derive the vertical concentration profile of a trace gas from its slant column measurement, the air mass factor (AMF) is only one part of the quantitative retrieval process. One indispensable requirement is a robust approach to invert the measurements and obtain the unknown concentrations, the air mass factors being known. For this purpose, the present thesis uses the Chahine relaxation method. Ground-based multiple-axis DOAS, combined with appropriate radiative transfer models and inversion techniques, is a promising tool for atmospheric studies in the lower troposphere and boundary layer, including the retrieval of profile information with a good degree of vertical resolution. This thesis has presented an application of this powerful, comprehensive tool to the study of a preserved natural Mediterranean area (the Castel Porziano Estate, located 20 km south-west of Rome) where pollution is transported from remote sources. Application of this tool in densely populated or industrial areas is beginning to look particularly fruitful and represents an important subject for future studies.
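The Chahine relaxation idea can be illustrated on a toy linear forward model y = Kx, where K is a matrix of box air mass factors (rows: viewing directions, columns: layers) and x holds the layer partial columns. The sketch below, with invented, noise-free numbers, updates each layer multiplicatively using the measurement most sensitive to it; it is a minimal sketch of the relaxation principle, not the thesis implementation.

```python
import numpy as np

def chahine_retrieval(K, y, x0, iters=200):
    """
    Chahine-style relaxation for y = K x (slant columns from a layered
    profile). Each layer i is updated with the measurement most
    sensitive to it: x_i <- x_i * y_j / (K x)_j.
    """
    x = x0.astype(float).copy()
    j_of_i = np.argmax(K, axis=0)  # most sensitive measurement per layer
    for _ in range(iters):
        y_calc = K @ x
        for i in range(x.size):
            j = j_of_i[i]
            if y_calc[j] > 0:
                x[i] *= y[j] / y_calc[j]
    return x

# toy box-AMF matrix: 3 off-axis directions x 3 layers (invented values)
K = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 1.0],
              [0.5, 1.0, 2.5]])
x_true = np.array([2.0, 1.0, 0.5])   # layer partial columns
y_meas = K @ x_true                  # noise-free slant columns
x_ret = chahine_retrieval(K, y_meas, x0=np.ones(3))
print(np.round(x_ret, 4))
```

With a diagonally dominant kernel and noise-free data the iteration converges to the true profile; real retrievals add noise, smoothing, and a stopping criterion.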


In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. Many standards are available in the area of noise measurement, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard on the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories must meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurement according to specific ISO standards and European Directives, the estimation of uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because the results are affected by systematic errors and standard deviations that depend on the number of microphones on the surface, their spatial positions, and the complexity of the sound field.
A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
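As a rough illustration of how an enveloping-surface measurement turns microphone SPL readings into a sound power level (the ISO 3744 form Lw = Lp_avg + 10*log10(S/S0), with the surface average taken on an energy basis), the sketch below compares two hypothetical arrays over the same source. All readings and the surface area are invented; real measurements also apply background and environmental corrections that are omitted here.

```python
import numpy as np

def sound_power_level(spl_db, surface_area_m2):
    """
    Simplified ISO 3744-style estimate: energy-average the surface SPL
    readings, then add the surface-area term 10*log10(S / S0), S0 = 1 m^2.
    """
    lp_avg = 10 * np.log10(np.mean(10 ** (np.asarray(spl_db) / 10)))
    return lp_avg + 10 * np.log10(surface_area_m2 / 1.0)

# hypothetical SPL readings (dB) from two arrays over the same source
spl_10_mics = [78.1, 77.5, 79.0, 78.3, 77.9, 78.6, 78.0, 77.7, 78.8, 78.2]
spl_5_mics  = [78.4, 77.6, 78.9, 78.1, 78.0]
S = 6.0  # measurement surface area in m^2 (invented)

lw10 = sound_power_level(spl_10_mics, S)
lw5  = sound_power_level(spl_5_mics, S)
print(round(lw10, 2), round(lw5, 2), round(abs(lw10 - lw5), 2))
```

The small difference between the two array results is exactly the kind of array-dependent discrepancy that the thesis analyses statistically.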


ABSTRACT This work's aim was to test whether LTP-like features can also be measured in cell culture, using methods that allow a larger number of cells to be analysed. A suitable method for this purpose is calcium imaging; the rationale for this approach lies in the fact that LTP/LTD depend on changes in intracellular calcium concentration. Calcium levels were measured using the calcium-sensitive dye fura-2, whose fluorescence spectrum changes upon formation of the [fura-2-Ca2+] complex. Our LTP-inducing protocol consisted of two glutamate stimuli of identical size and duration (50 mM, 30 s) separated by 35 min. We could demonstrate that such a stimulation pattern gives rise to an average 25% augmentation (potentiation) of the calcium response to the second stimulus, with 69% of cells potentiated. This experimental paradigm shows the pharmacological properties of LTP established by previous electrophysiological studies: blocking NMDARs and mGluRs eliminates LTP induction, whereas blocking AMPARs and L-type VGCCs does not. Having obtained a system for inducing and following LTP-like changes, a preliminary application example was performed to investigate the possible influence of nicotine and galanthamine on the potentiation effect. Nicotine (100 mM) was shown both to increase and to eliminate glutamate-induced potentiation. Galanthamine (0.5 mM) co-applied with nicotine and glutamate had no effect on this nicotinic modulation; however, galanthamine co-applied with glutamate alone appears to augment glutamate-induced potentiation. The LTP model system presented here could be refined further by varying the glutamate application times and by testing for dependence on various forms of protein kinases.
The galanthamine effect would probably be better addressed by cell-to-cell measurements, with subsequent identification of the cell type, instead of a statistical approach. Alternatively, combined calcium imaging and electrophysiological experiments could be performed. The spatial and temporal properties of intracellular ion dynamics could be utilised as diagnostic tools for the physiological state of the cells, thereby finding application in functional proteomics.
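Calcium concentrations from fura-2 ratio measurements are conventionally computed with the Grynkiewicz ratio equation. The sketch below shows that calculation; the calibration constants (Kd, Rmin, Rmax, Sf2/Sb2) are typical textbook-style values inserted for illustration, not the values used in this work.

```python
def ca_from_fura2(R, R_min, R_max, Kd_nM=224.0, sf2_sb2=5.0):
    """
    Grynkiewicz equation for fura-2 ratiometric imaging:
    [Ca2+] = Kd * (R - Rmin) / (Rmax - R) * (Sf2 / Sb2)
    where R is the 340/380 nm fluorescence ratio and Rmin, Rmax are the
    ratios at zero and saturating calcium.
    """
    if not (R_min < R < R_max):
        raise ValueError("ratio outside calibration range")
    return Kd_nM * (R - R_min) / (R_max - R) * sf2_sb2

# hypothetical calibration and a resting-level ratio
baseline_nM = ca_from_fura2(R=0.8, R_min=0.3, R_max=2.5)
print(round(baseline_nM, 1))
```

In an experiment like the one described, this conversion (or the raw ratio) would be tracked through both glutamate stimuli to quantify the ~25% potentiation of the second response.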


In this work, the electron emission from nanoparticles on surfaces was investigated by spectroscopic photoelectron microscopy. Specifically, metallic nanoclusters were studied, as self-organized ensembles on silicon or glass substrates, as well as a metal chalcogenide (MoS2) nanotube prototype on silicon. The main part of the investigations focused on the interaction of fs laser radiation with the nanoparticles. The photon energy was smaller than the work function of the investigated samples, so that one-photon photoemission could be excluded. Our investigations showed that, in going from a continuous metal film to cluster films, a different emission mechanism appears in competition with multiphoton photoemission and begins to dominate for small clusters. The nature of this new mechanism was investigated by various experiments. The transition from a continuous film to a nanoparticle film is accompanied by an increase in the emission current of more than an order of magnitude. The photoemission intensity grows with decreasing temporal width of the laser pulse, but this dependence becomes less steep with decreasing particle size. The experimental results were explained by different electron emission mechanisms, e.g. multiphoton photoemission (nPPE), thermionic emission and thermally assisted nPPE, as well as optical field emission. The first mechanism dominates for continuous films and particles larger than several tens of nanometers, the second and third for films of nanoparticles a few nanometers in size. The microspectroscopic measurements confirmed the 2PPE emission mechanism for thin silver films under "blue" laser excitation (hν = 375-425 nm).

The onset of the Fermi level is relatively sharp and shifts by 2hν when the photon energy is increased, whereas it is clearly broadened under "red" laser excitation (hν = 750-850 nm). It was found that, with increasing laser power, the yield of low-energy electrons increases more weakly than the yield of higher-energy electrons near the Fermi edge within one spectrum. This is a clear indication of the coexistence of different emission mechanisms within one spectrum. To understand the size dependence of the emission behaviour theoretically, a statistical approach to the light absorption of small metal particles was derived and discussed. In additional investigations, the electron emission properties under laser excitation were compared with another kind of excitation: the passage of a tunneling current through a metal cluster film near the percolation threshold. The electrical and emission properties of current-carrying silver cluster films, prepared in a narrow gap (5-25 µm wide) between silver contacts on an insulator, were investigated for the first time with an emission electron microscope (EEM). Electron emission starts in the non-Ohmic regime of the conduction current-voltage curve of the cluster film. We studied the behaviour of a single emission center in the EEM. It was found that the emission centers in a current-carrying silver cluster film are point sources of electrons which can sustain high emission current densities (more than 100 A/cm2). The width of the energy distribution of the electrons from a single emission center was estimated at about 0.5-0.6 eV. Thermionic emission from the steady-state hot electron gas in current-carrying metallic particles is proposed as the emission mechanism.

Size-selected individual MoS2 nanotubes deposited on Si substrates were investigated with time-of-flight-based two-photon photoemission spectromicroscopy. Under fs laser excitation, the nanotube spectra showed a surprisingly high emission intensity, clearly higher than that of the SiOx substrate surface, whereas the tubes were invisible under VUV excitation at hν = 21.2 eV. An ab initio calculation for a MoS2 slab explains the high intensity by a high density of free intermediate states for the two-photon transition at hν = 3.1 eV.
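The photon order of a multiphoton process is commonly determined from the slope of the yield-versus-intensity curve on a log-log scale, since Y ∝ I^n for n-photon photoemission. A minimal sketch with synthetic, noise-free 2PPE data (all numbers invented):

```python
import numpy as np

# hypothetical laser intensities (arb. units) and 2PPE yields Y = c * I^2
intensity = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
yield_2ppe = 3.0 * intensity ** 2

# the slope of log(Y) vs log(I) is the photon order n
n, log_prefactor = np.polyfit(np.log(intensity), np.log(yield_2ppe), 1)
print(round(n, 3))
```

In practice the fitted order for cluster films would fall below the nominal multiphoton value when thermally assisted or field-emission channels contribute, which is one signature of the competing mechanisms discussed above.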


This doctoral thesis falls within the framework of the agreement between ARPA-SIMC (the funding body), the Regional Civil Protection Agency, and the Department of Earth and Geological-Environmental Sciences of the University of Bologna. The main objective is the determination of possible rainfall thresholds for the triggering of landslides in Emilia-Romagna that can be used as a forecasting support tool in the Civil Protection operations room. In such a complex geological context, a traditional empirical approach is not sufficient to discriminate unambiguously between triggering and non-triggering meteorological events, and in general the distribution of the data appears too scattered to draw a statistically significant threshold. It was therefore decided to apply a rigorous Bayesian statistical approach, innovative in that it computes the probability of a landslide given a certain rainfall event, P(L|R), considering not only the landslide-triggering rainfall (i.e. the conditional probability of a certain rainfall event given the occurrence of a landslide, P(R|L)) but also the non-triggering rainfall (i.e. the prior probability of a rainfall event, P(R)). The Bayesian approach was applied to the period between 1939 and 2009. The resulting probability isolines minimize false alarms and are easily implementable in a regional warning system, but they may have forecasting limits for phenomena not represented in the historical dataset or occurring under anomalous conditions. Examples are shallow landslides evolving into debris flows, extremely rare in the last 70 years but recently increasing in frequency.
An attempt was made to address this problem by testing the forecasting variability of several physically based models specifically developed for this purpose, including X-SLIP (Montrasio et al., 1998), SHALSTAB (SHALlow STABility model, Montgomery & Dietrich, 1994), Iverson (2000), TRIGRS 1.0 (Baum et al., 2002), and TRIGRS 2.0 (Baum et al., 2008).
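The Bayesian threshold idea can be sketched with simple event counts: for each rainfall-magnitude bin, estimate P(L|R) = P(R|L) * P(L) / P(R) empirically from the record of triggering and non-triggering days. The toy daily record below is invented for illustration; the thesis works with a 70-year catalogue and intensity-duration bins.

```python
import numpy as np

def landslide_probability(rain, slide, bins):
    """
    Empirical Bayes estimate per rainfall-magnitude bin:
    P(L|R) = P(R|L) * P(L) / P(R), all terms from event counts.
    """
    rain = np.asarray(rain, float)
    slide = np.asarray(slide, bool)
    p_l = slide.mean()                      # prior probability of landslide
    idx = np.digitize(rain, bins)           # assign each day to a rain bin
    probs = {}
    for b in np.unique(idx):
        in_bin = idx == b
        p_r = in_bin.mean()                 # prior probability of this rain bin
        p_r_given_l = in_bin[slide].mean()  # rain bin frequency on slide days
        probs[int(b)] = p_r_given_l * p_l / p_r
    return probs

# toy record: daily rainfall (mm) and whether a landslide occurred
rain  = [0, 5, 60, 2, 80, 1, 70, 3, 0, 90]
slide = [0, 0, 1,  0, 1,  0, 0,  0, 0, 1]
probs = landslide_probability(rain, slide, bins=[50])
print(probs)
```

Because the non-triggering days enter through P(R), heavy-rain days without landslides directly lower the estimated probability, which is exactly the false-alarm control the abstract emphasizes.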


Background: Many medical exams use 5 options for multiple choice questions (MCQs), although the literature suggests that 3 options are optimal. Previous studies on this topic have often been based on non-medical examinations, so we sought to analyse rarely selected, 'non-functional' distractors (NF-D) in high-stakes medical examinations, their detection by item authors, and the psychometric changes resulting from a reduction in the number of options.

Methods: Based on the Swiss Federal MCQ examinations from 2005-2007, the frequency of NF-Ds (selected by <1% or <5% of the candidates) was calculated. The distractors chosen least and second least often were identified, and the candidates who chose them were allocated to the remaining options using two extreme assumptions about their hypothetical behaviour: had the rarely selected distractors been eliminated, candidates would either choose another option at random, or purposively choose the correct answer from which they had originally been distracted. In a second step, 37 experts were asked to mark the least plausible options. The consequences of a reduction from 4 to 3 or 2 distractors, based on item statistics or on the experts' ratings, were modelled with respect to difficulty, discrimination, and reliability.

Results: About 70% of the 5-option items had at least 1 NF-D selected by <1% of the candidates (97% for NF-Ds selected by <5%). Only a reduction to 2 distractors, together with the assumption that candidates would switch to the correct answer in the absence of a non-functional distractor, led to relevant differences in reliability and difficulty (and, to a lesser degree, discrimination). The experts' ratings resulted in slightly greater changes than the statistical approach.

Conclusions: Based on item statistics and/or an expert panel's recommendation, a varying number of 3-4 (or in part 2) plausible distractors could be chosen without marked deterioration in psychometric characteristics.
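The two extreme reallocation assumptions can be sketched on a single hypothetical item. The response counts and the <1% threshold below are illustrative, not taken from the Swiss Federal examinations; difficulty is the classical P-value (proportion correct).

```python
import numpy as np

def difficulty_after_removal(counts, correct, nfd_threshold=0.01):
    """
    counts: responses per option; correct: index of the keyed answer.
    Finds distractors chosen by < threshold of candidates and returns
    item difficulty under the two extreme assumptions about where the
    displaced candidates would go.
    """
    counts = np.asarray(counts, float)
    n = counts.sum()
    nfd = [i for i in range(len(counts))
           if i != correct and counts[i] / n < nfd_threshold]
    displaced = counts[nfd].sum()
    keep = [i for i in range(len(counts)) if i not in nfd]
    p_orig = counts[correct] / n
    # assumption 1: displaced candidates choose at random among remaining options
    p_random = (counts[correct] + displaced / len(keep)) / n
    # assumption 2: displaced candidates all switch to the correct answer
    p_to_key = (counts[correct] + displaced) / n
    return p_orig, p_random, p_to_key

# hypothetical 5-option item, option 0 keyed, 200 candidates
p_orig, p_rand, p_key = difficulty_after_removal([120, 55, 20, 4, 1], correct=0)
print(p_orig, p_rand, p_key)
```

Repeating this over an item bank and recomputing discrimination and reliability under each assumption reproduces the kind of modelling the study reports.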


Background: This study addressed the temporal properties of personality disorders and their treatment by schema-centered group psychotherapy. It investigated the change mechanisms of psychotherapy using a novel method by which psychotherapy can be modeled explicitly in the temporal domain.

Methodology and Findings: 69 patients were assigned to a specific schema-centered behavioral group psychotherapy, and 26 to social skills training as a control condition. The largest diagnostic subgroups were narcissistic and borderline personality disorder. Both treatments offered 30 group sessions of 100 min each, at a frequency of two sessions per week. The therapy process was described by components resulting from principal component analysis of the patients' session reports, which were obtained after each session. These patient-assessed components were Clarification, Bond, Rejection, and Emotional Activation. The statistical approach focused on time-lagged associations between components using time-series panel analysis, which provided a detailed quantitative representation of the therapy process. Clarification was found to play a core role in schema-centered psychotherapy, reducing Rejection and regulating the patients' emotion; this was also a change mechanism linked to therapy outcome.

Conclusions/Significance: The process-oriented methodology introduced here made it possible to highlight the mechanisms by which psychotherapeutic treatment became effective. Additionally, the process models depicted the actual patterns that differentiated specific diagnostic subgroups. Time-series analysis explores Granger causality, a non-experimental approximation of causality based on temporal sequences. This methodology, resting upon naturalistic data, can explicate mechanisms of action in psychotherapy research and illustrate the temporal patterns underlying personality disorders.
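A time-lagged association of the kind described (one component at session t predicting another at session t+1) can be sketched as a lag-1 panel regression. The simulated "Clarification reduces Rejection" process below is purely illustrative; the coefficients, noise levels, and series length are invented, and the actual study uses time-series panel analysis across many patients.

```python
import numpy as np

def cross_lagged_effects(X):
    """
    Lag-1 panel model X_t = B X_{t-1} + e. The off-diagonal entries of B
    are the time-lagged (Granger-style) effects between components.
    """
    past, future = X[:-1], X[1:]
    C, *_ = np.linalg.lstsq(past, future, rcond=None)  # future ~= past @ C
    return C.T                                         # so that X_t = B X_{t-1}

# toy process: Clarification at session t reduces Rejection at t+1
rng = np.random.default_rng(0)
T = 300
clar = np.zeros(T)
rej = np.zeros(T)
for t in range(1, T):
    clar[t] = 0.5 * clar[t - 1] + rng.normal(scale=0.3)
    rej[t] = 0.4 * rej[t - 1] - 0.6 * clar[t - 1] + rng.normal(scale=0.3)
X = np.column_stack([clar, rej])
B = cross_lagged_effects(X)
print(np.round(B, 2))
```

A clearly negative B[1, 0] with a near-zero B[0, 1] is the asymmetric, time-ordered pattern that motivates a Granger-style causal reading.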


Among the many applications of microarray technology, one of the most popular is the identification of genes that are differentially expressed in two conditions. A common statistical approach is to quantify the interest of each gene with a p-value, adjust these p-values for multiple comparisons, choose an appropriate cut-off, and create a list of candidate genes. This approach has been criticized for ignoring biological knowledge regarding how genes work together. Recently, a series of methods that do incorporate biological knowledge have been proposed. However, many of these methods seem overly complicated. Furthermore, the most popular method, Gene Set Enrichment Analysis (GSEA), is based on a statistical test known for its lack of sensitivity. In this paper we compare the performance of a simple alternative to GSEA. We find that this simple solution clearly outperforms GSEA, and we demonstrate this with eight different microarray datasets.
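One natural "simple alternative" of the kind the paper advocates is an aggregate of per-gene statistics over a gene set; the sketch below averages per-gene t-statistics and scales by the square root of the set size, so that under the null the score is roughly standard normal. This is an illustrative example of the idea, not necessarily the exact statistic used in the paper, and the data are synthetic.

```python
import numpy as np

def gene_set_z(t_stats, set_idx):
    """
    Simple gene-set score: average the per-gene t-statistics in the set
    and scale by sqrt(set size); large |z| flags a coherently shifted set.
    """
    s = np.asarray(t_stats, float)[set_idx]
    return np.sqrt(s.size) * s.mean()

rng = np.random.default_rng(42)
t_stats = rng.normal(size=1000)   # null genes: t-statistics ~ N(0, 1)
t_stats[:20] += 1.0               # one 20-gene set shifted up by 1

z_shifted = gene_set_z(t_stats, np.arange(20))
z_null = gene_set_z(t_stats, np.arange(500, 520))
print(round(z_shifted, 2), round(z_null, 2))
```

The shifted set scores far from zero while a random set stays near zero, which is the sensitivity to small coherent shifts that rank-based enrichment tests can miss.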


Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines it. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis-testing procedure to estimate it automatically. We present validations using four experiments: (1) a leave-one-out experiment, (2) an experiment evaluating the present approach for handling pathology, (3) an experiment evaluating it for handling outliers, and (4) an experiment reconstructing surface models of seven dry cadaver femurs using clinically relevant data, both without and with added noise. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95th-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with added noise.
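The LTS idea used in all three stages can be sketched with concentration steps: repeatedly refit by ordinary least squares on the h points with the smallest residuals, where h is set from an assumed outlier rate. The line-fitting toy below illustrates the general LTS mechanism, not the paper's three-stage registration; all data and the outlier rate are invented.

```python
import numpy as np

def lts_line_fit(x, y, outlier_rate=0.3, iters=20):
    """
    Least trimmed squares for a line y = a*x + b: alternate between an
    OLS fit and keeping the h points with smallest residuals (C-steps).
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    h = int(np.ceil((1 - outlier_rate) * x.size))  # points to trust
    keep = np.arange(x.size)
    for _ in range(iters):
        A = np.column_stack([x[keep], np.ones(keep.size)])
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        resid = np.abs(y - (coef[0] * x + coef[1]))
        keep = np.argsort(resid)[:h]               # concentration step
    return coef

x = np.arange(20, dtype=float)
y = 2.0 * x + 1.0
y[3] += 40.0    # gross outliers of the kind LTS is designed to reject
y[15] -= 35.0
a, b = lts_line_fit(x, y, outlier_rate=0.2)
print(round(a, 3), round(b, 3))
```

The gross outliers are trimmed after the first refit and the clean line is recovered exactly; the paper's hypothesis-testing procedure replaces the fixed `outlier_rate` assumed here.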


Two of the indicators of the UN Millennium Development Goals on ensuring environmental sustainability are energy use and per capita carbon dioxide emissions. Increasing urbanization and a growing world population may require increased energy use to transport enough safe drinking water to communities. In addition, increased water use results in increased energy consumption, and thereby in increased greenhouse gas emissions that promote global climate change. A study of multiple Municipal Drinking Water Distribution Systems (MDWDSs) that relates various MDWDS aspects -- system components and properties -- to energy use is therefore strongly desirable, since understanding the relationship between system aspects and energy use aids energy-efficient design. In this study, the components of an MDWDS and/or the characteristics associated with a component are termed MDWDS aspects (hereafter, system aspects). Many aspects of MDWDSs affect energy usage; three were analyzed in this study: (1) system-wide water demand, (2) storage tank parameters, and (3) pumping stations. The study involved seven MDWDSs, modeled with EPANET 2.0, to understand the relationship between these system aspects and energy use. Six of the systems were real and one was hypothetical. The study presented here is unique in its statistical approach using seven municipal water distribution systems.

The first system aspect studied was system-wide water demand. The seven systems were analyzed for the variation of water demand and its impact on energy use: to quantify the effect of water use reduction on energy use, the systems were modeled and the energy usage quantified for various amounts of water conservation. It was found that the effect of water conservation on energy use was linear for all seven systems, and that the average energy-use values of all the systems plotted on the same line with a high R^2 value. From this relationship, a 20% reduction in water demand results in approximately a 13% savings in energy use for all seven systems analyzed. This figure might hold true for many similar systems that are dominated by pumping rather than gravity driven.

The second system aspect analyzed was storage tank parameters: (1) tank maximum water level, (2) tank elevation, and (3) tank diameter. MDWDSs use a significant amount of electrical energy to pump water from low elevations (usually a source) to higher ones (usually storage tanks), and this use of electrical energy affects pollution emissions and, therefore, potential global climate change as well. Various values of these tank parameters were modeled on the seven MDWDSs using a network solver, and the energy usage was recorded. Averaged over all seven systems, it was found that (1) reducing the maximum tank water level by 50% results in a 2% energy reduction, (2) the energy-use change for a change in tank elevation is system specific, and (3) a 50% reduction in tank diameter results in approximately a 7% energy savings.

The third system aspect analyzed was pumping station parameters; a pumping station consists of one or more pumps. The seven systems were analyzed to understand the effect of varying pump horsepower and the number of booster stations on energy use. It was found that adding booster stations could save energy, depending on the system characteristics. For systems with flat topography, a single main pumping station was found to use less energy. In systems with a higher-elevation neighborhood, however, one or more booster pumps with a reduced main pumping station capacity used less energy. The energy savings depended on the number of boosters and ranged from 5% to 66% for the five analyzed systems with higher-elevation neighborhoods (S3, S4, S5, S6, and S7); no energy savings was realized for the two flat-topography systems, S1 and S2.

The present study analyzed and established the relationship between various system aspects and energy use in seven MDWDSs, which aids in estimating energy savings in such systems. These energy savings would ultimately help reduce greenhouse gas (GHG) emissions, including per capita CO2 emissions, thereby potentially lowering the effect of global climate change and contributing to the MDG of ensuring environmental sustainability.
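Why demand reductions translate into energy reductions can be seen from the basic pump power relation E = ρ·g·Q·H·t / η. In this idealized form energy is strictly proportional to flow, so a 20% demand cut gives a 20% saving; the study's ~13% figure reflects system-specific effects (head variation, pump curves, storage operation) that this sketch omits. All numbers below are hypothetical.

```python
def pump_energy_kwh(flow_m3_per_s, head_m, hours, efficiency=0.75):
    """
    Electrical energy for pumping: E = rho * g * Q * H * t / eta,
    with water density rho = 1000 kg/m^3 and g = 9.81 m/s^2.
    """
    rho, g = 1000.0, 9.81
    power_w = rho * g * flow_m3_per_s * head_m / efficiency
    return power_w * hours / 1000.0  # convert Wh to kWh

# hypothetical system: 0.1 m^3/s lifted 50 m for 24 h at 75% efficiency
base = pump_energy_kwh(0.1, 50, 24)
reduced = pump_energy_kwh(0.08, 50, 24)  # 20% demand reduction
print(round(base, 1), round(100 * (1 - reduced / base), 1))
```

A network solver such as EPANET effectively integrates this relation over time-varying flows, heads, and pump efficiencies, which is where the sub-proportional (~13%) system-level saving comes from.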