929 results for Source analysis
Abstract:
As water quality interventions are scaled up to meet the Millennium Development Goal of halving the proportion of the population without access to safe drinking water by 2015, there has been much discussion on the merits of household- and source-level interventions. This study furthers the discussion by examining specific interventions through the use of embodied human and material energy. Embodied energy quantifies the total energy required to produce and use an intervention, including all upstream energy transactions. This model uses material quantities and prices to calculate embodied energy with national economic input/output-based models from China, the United States and Mali. Embodied energy is a measure of the aggregate environmental impacts of the interventions. Human energy, which quantifies the caloric expenditure associated with the installation and operation of an intervention, is calculated using physical activity ratios (PARs) and basal metabolic rates (BMRs). Human energy is a measure of the aggregate social impacts of an intervention. A total of four household treatment interventions – biosand filtration, chlorination, ceramic filtration and boiling – and four water source-level interventions – an improved well, a rope pump, a hand pump and a solar pump – are evaluated in the context of Mali, West Africa. Source-level interventions slightly outperform household-level interventions in terms of total embodied energy. Human energy, typically assumed to be a negligible portion of total embodied energy, is shown to be significant for all eight interventions, contributing over half of the total embodied energy in four of them. Traditional gender roles in Mali dictate the types of work performed by men and women. When human energy is disaggregated by gender, women are found to perform over 99% of the work associated with seven of the eight interventions. This has profound implications for gender equality in the context of water quality interventions, and may justify investment in interventions that reduce human energy burdens.
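The human-energy term above follows the standard approach of scaling a basal metabolic rate (BMR) by a task-specific physical activity ratio (PAR) over the task duration. A minimal sketch of that calculation; the BMR and PAR values are illustrative assumptions, not figures from the study:

```python
def human_energy_kcal(bmr_kcal_per_day: float, par: float, hours: float) -> float:
    """Caloric expenditure of a task: the basal metabolic rate scaled by the
    task's physical activity ratio (PAR) over its duration."""
    return bmr_kcal_per_day / 24.0 * par * hours

# Illustrative values only -- not taken from the study.
bmr = 1300.0  # kcal/day, assumed adult BMR
fetching_water = human_energy_kcal(bmr, par=4.0, hours=1.5)    # assumed PAR for carrying loads
treating_water = human_energy_kcal(bmr, par=1.5, hours=0.25)   # assumed PAR for light household work
print(f"daily human energy: {fetching_water + treating_water:.0f} kcal")
```

Summed over a year and converted to primary energy, such per-task figures are what allow the human contribution to be compared against the material embodied energy of each intervention.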
Abstract:
In 2010 more than 600 radiocarbon samples were measured with the gas ion source at the MIni CArbon DAting System (MICADAS) at ETH Zurich, and the number of measurements is rising quickly. While most samples currently contain less than 50 μg C, the gas ion source is also attractive for larger samples because the time-consuming graphitization step is omitted. Additionally, modern samples can now be measured down to 5 per-mill counting statistics in less than 30 min with the recently improved gas ion source. In the versatile gas handling system, a stepping-motor-driven syringe presses a mixture of helium and sample CO2 into the gas ion source, allowing continuous and stable measurements of different kinds of samples. CO2 can be provided to the versatile gas interface in four different ways. As the primary method, CO2 is delivered in glass or quartz ampoules; in this case, the CO2 is released in an automated ampoule cracker with 8 positions for individual samples. Secondly, OX-1 and blank gas in helium can be supplied to the syringe by connecting gas bottles directly to the gas interface at the cracker stage. Thirdly, solid samples can be combusted in an elemental analyzer or in a thermo-optical OC/EC aerosol analyzer, and the produced CO2 is transferred to the syringe via a zeolite trap for gas concentration. As a fourth method, CO2 is released from carbonates with phosphoric acid in septum-sealed vials and loaded onto the same trap used for the elemental analyzer. All four methods allow complete automation of the measurement, although minor user input is presently still required. Details on the setup, versatility and applications of the gas handling system are given. (C) 2012 Elsevier B.V. All rights reserved.
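The quoted "5 per-mill counting statistics in less than 30 min" translates directly into a required number of 14C counts through Poisson statistics (relative uncertainty ≈ 1/√N). A minimal sketch of that arithmetic; the 30-minute acquisition time is the only figure taken from the abstract:

```python
def counts_for_relative_precision(rel_precision: float) -> float:
    """Poisson counting statistics: relative uncertainty ~ 1/sqrt(N),
    so N = 1/rel_precision**2 counts are required."""
    return 1.0 / rel_precision ** 2

n = counts_for_relative_precision(0.005)  # 5 per-mill
rate = n / (30 * 60)                      # average count rate if acquired within 30 minutes
print(f"{n:.0f} counts needed, i.e. an average of {rate:.1f} counts/s over 30 min")
```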
Abstract:
While several studies have investigated winter-time air pollution at a wide range of concentration levels, hardly any results are available for longer time periods covering several winter-smog episodes at various locations; typically, only a few weeks from a single winter are investigated. Here, we present source apportionment results for winter-smog episodes at 16 air pollution monitoring stations across Switzerland over five consecutive winters. Radiocarbon (14C) analyses of the elemental (EC) and organic (OC) carbon fractions, as well as levoglucosan, major water-soluble ionic species and gas-phase pollutant measurements, were used to characterize the different sources of PM10. The most important contributions to PM10 during winter-smog episodes in Switzerland were on average the secondary inorganic constituents (sum of nitrate, sulfate and ammonium = 41 ± 15%), followed by organic matter (OM) (34 ± 13%) and EC (5 ± 2%). The non-fossil fraction of OC (fNF,OC) ranged on average from 69 to 85% and from 80 to 95% for stations north and south of the Alps, respectively, showing that traffic contributes on average only up to ~30% of OC. The non-fossil fraction of EC (fNF,EC), entirely attributable to primary wood burning, was on average 42 ± 13% and 49 ± 15% north and south of the Alps, respectively. While a high correlation was observed between fossil EC and nitrogen oxides, both primarily emitted by traffic, these species did not correlate significantly with fossil OC (OCF), suggesting that a considerable fraction of OCF is secondary, formed from fossil precursors. The elevated fNF,EC and fNF,OC values, and the high correlation of the latter with other wood-burning markers including levoglucosan and water-soluble potassium (K+), indicate that residential wood burning is the major source of carbonaceous aerosols during winter-smog episodes in Switzerland. Inspection of the non-fossil OC and EC levels and their relation to levoglucosan and water-soluble K+ shows different ratios for stations north and south of the Alps, most likely because of differences in burning technologies in these two regions of Switzerland.
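The fossil/non-fossil split reported above is simple arithmetic once the measured carbon concentrations and the 14C-derived non-fossil fractions are known. A minimal sketch; the non-fossil fractions echo the ranges quoted in the abstract, but the concentrations are made-up illustrative values, not results from the paper:

```python
def split_fossil_nonfossil(total: float, f_nonfossil: float) -> tuple[float, float]:
    """Split a carbon concentration (e.g. ug/m3 of OC or EC) into its
    non-fossil and fossil parts using the 14C-derived non-fossil fraction."""
    non_fossil = total * f_nonfossil
    return non_fossil, total - non_fossil

# Hypothetical winter-smog concentrations in ug/m3 (not from the study).
oc_nf, oc_f = split_fossil_nonfossil(total=8.0, f_nonfossil=0.75)  # fNF,OC ~0.69-0.85 north of the Alps
ec_nf, ec_f = split_fossil_nonfossil(total=1.2, f_nonfossil=0.42)  # fNF,EC ~0.42 (primary wood burning)
print(f"OC: {oc_nf:.1f} non-fossil / {oc_f:.1f} fossil; EC: {ec_nf:.2f} / {ec_f:.2f}")
```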
Abstract:
Background: Grayscale images make up the bulk of the data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive and not well suited to a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific and do not provide a clear path from a prototype shell script to a new command line tool. Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that makes it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language (see the sketch after this abstract). Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for grayscale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms with shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
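The prototyping style described above (small, single-task command line tools chained through intermediate files on disk, with string-based descriptions for filters) can be mimicked from any scripting environment. A minimal sketch in Python; the tool names, flags and the filter string below are placeholders for illustration, not MIA's actual command line interface:

```python
import subprocess
from pathlib import Path

def run(cmd: list[str]) -> None:
    """Run one single-task command line tool; intermediate results live on the
    hard disk, so working-memory management is not a concern while prototyping."""
    subprocess.run(cmd, check=True)

work = Path("work")
work.mkdir(exist_ok=True)

# Hypothetical two-step pipeline: denoise, then segment. Tool names, options and
# the string-based filter description are placeholders, not MIA's real interface.
run(["denoise-tool", "-i", "input.png", "-o", str(work / "denoised.png"),
     "--filter", "gauss:sigma=2"])
run(["segment-tool", "-i", str(work / "denoised.png"), "-o", str(work / "labels.png")])
```

Once such a script stabilizes, the same string-based descriptions can be handed to library calls in a compiled program, which is the transition path the framework emphasizes.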
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of the hardware platform with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them. Techniques based on extensions of intervals have allowed accurate models of signal and quantization-noise propagation to be obtained for systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system's statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the good accuracy of our approach, which in some case studies with non-linear operators shows a deviation of only 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at a given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that reduce the execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the optimization process, more relaxed levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems. Finally, this thesis introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
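The greedy word-length search and the incremental confidence idea described in the abstract can be made concrete with a short sketch. This is a generic illustration, not HOPLITE's implementation or API; `simulate_error` (a Monte-Carlo error estimator taking a word-length vector and a sample count) and `cost` (a hardware cost model) are hypothetical stand-ins:

```python
from typing import Callable, Sequence

def greedy_wordlength_search(
    n_signals: int,
    max_wl: int,
    error_budget: float,
    simulate_error: Callable[[Sequence[int], int], float],  # hypothetical Monte-Carlo error estimator
    cost: Callable[[Sequence[int]], float],                 # hypothetical hardware cost model
) -> list[int]:
    """Classic greedy descent over fixed-point word-lengths: start from the
    widest assignment and repeatedly shrink the signal whose reduction saves
    the most cost while the Monte-Carlo error estimate stays within budget.
    Early iterations use few samples (relaxed confidence); only the final
    candidate is re-checked with many samples (the incremental idea)."""
    wl = [max_wl] * n_signals
    coarse, fine = 1_000, 100_000  # illustrative sample counts: relaxed vs. final confidence
    improved = True
    while improved:
        improved = False
        best = None
        for i in range(n_signals):
            if wl[i] <= 2:
                continue  # do not shrink below a minimal width
            trial = list(wl)
            trial[i] -= 1
            if simulate_error(trial, coarse) <= error_budget:
                saving = cost(wl) - cost(trial)
                if best is None or saving > best[0]:
                    best = (saving, trial)
        if best is not None:
            wl = best[1]
            improved = True
    if simulate_error(wl, fine) > error_budget:
        wl = [w + 1 for w in wl]  # simple fallback: widen everything if the tight check fails
    return wl
```

The interpolative method described in the abstract would replace most of the coarse `simulate_error` calls with a cheap per-signal sensitivity estimate, which is where the reported speed-ups come from.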
Abstract:
Aug. 1979.
Abstract:
National Highway Traffic Safety Administration, Office of Research and Development, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Office of Research and Development, Washington, D.C.