938 results for "methods of resolution enhancement"
Abstract:
Developing countries are heavily burdened by limited access to safe drinking water and the water-related diseases that follow. Numerous water treatment interventions combat this public health crisis, encompassing both traditional and less common methods. Of these, water disinfection serves as an important means of providing safe drinking water. Existing literature discusses a wide range of traditional treatment options and encourages multi-barrier approaches including coagulation-flocculation, filtration, and disinfection; most sources, however, do not delve into approaches specifically appropriate for developing countries, nor do they exclusively examine water disinfection methods. The objective of this review is to cover an extensive range of chemical, physicochemical, and physical water disinfection techniques and to provide a compilation, description, and evaluation of the available options. This objective furthers understanding and knowledge to better inform water treatment interventions and explores alternative means of water disinfection appropriate for developing countries. Appropriateness for developing countries corresponds to the effectiveness of an available, easy-to-use disinfection technique at providing safe drinking water at low cost. Among chemical disinfectants, SWS sodium hypochlorite solution is preferred over sodium hypochlorite bleach because of its consistent concentration. Tablet-form chemical disinfectants are highly recommended because they are effective, very easy to use, and stable; examples include sodium dichloroisocyanurate, calcium hypochlorite, and chlorine dioxide, which vary in cost depending on location and availability. Among physicochemical disinfection options, electrolysis producing mixed oxidants (MIOX) provides highly effective disinfection with a higher upfront cost but a very low cost over the long term. Among physical disinfection options, solar disinfection (SODIS) applications are effective but treat only a fixed volume of water at a time, and they come with higher initial costs but very low ongoing costs. Additional effective disinfection techniques may be suitable depending on location, availability, and cost.
Abstract:
A paleomagnetic study was made of the deep-marine sediments and volcanic rocks drilled by Ocean Drilling Program Leg 126 in the Izu-Bonin forearc region (Sites 787, 792, and 793). This study evaluates the sense and amount of the tectonic drift and rotation associated with the evolution of the Philippine Sea Plate and the Izu-Bonin Arc. Alternating-field and thermal demagnetization experiments show that most of the samples carry a stable remanence and are suitable for paleomagnetic study. Paleomagnetic declinations were recovered by two methods of core orientation: one uses the secondary viscous magnetization vector of each specimen as an orientation standard, and the other is based on downhole microresistivity data obtained with a formation microscanner. Oligocene to early Miocene samples show paleolatitudes 10° to 14° shallower than the present one. Early Oligocene to middle Miocene samples show progressive clockwise deflections (up to ~80°) in declination with time. These results suggest a large northward drift and clockwise rotation of the Izu-Bonin forearc region since early Oligocene time. Considered together with previous paleomagnetic results from other regions of the Philippine Sea, this motion may reflect a large clockwise rotation of the whole Philippine Sea Plate over the past 40 m.y.
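The viscous-remanence orientation method mentioned above reduces to a simple azimuth correction. A minimal sketch (all angles invented), assuming the viscous overprint records the present-day field direction:

```python
# Minimal sketch of azimuthal core orientation using the viscous
# remanent magnetization (VRM) as a north reference. Angles invented.
def reorient_declination(chrm_dec, vrm_dec):
    """Rotate a characteristic-remanence declination from core
    coordinates to geographic coordinates, assuming the VRM points
    toward present-day north (declination 0)."""
    return (chrm_dec - vrm_dec) % 360.0

# A ChRM declination of 150 deg in core coordinates, with the VRM at
# 70 deg, implies a true declination of 80 deg.
print(reorient_declination(150.0, 70.0))   # -> 80.0
```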
Abstract:
Twenty-one samples, ranging in depth from 0 to 150 meters below seafloor (mbsf), were obtained from Leg 123 Sites 765 and 766. All samples were tested for Atterberg limits; 14 were also tested for laboratory vane shear strength, and seven for uniaxial consolidation. Based on the determined Atterberg limits, along with shipboard measurements of water content, the sediment appears to be underconsolidated from 0 to 40 mbsf at Site 765 and from 0 to 80 mbsf at Site 766. Normal consolidation trends were observed for the sediments below these depths. Vane shear strengths, when compared with values calculated for a normally consolidated clay, indicate underconsolidated sediment at both sites. However, the use of Atterberg limit and vane shear strength data to assess consolidation state is complicated by the presence of silt-sized calcium carbonate in the form of nannofossil ooze. Thus, uniaxial-consolidation test data were analyzed to determine overconsolidation ratios (OCR) and sediment compressibilities. Using two separate methods of analysis, OCR values were found to be less than one (underconsolidated) at both sites.
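For reference, the consolidation-state measure used above follows the standard geotechnical definition

```latex
\mathrm{OCR} = \frac{\sigma'_p}{\sigma'_{v0}}
```

where \sigma'_p is the preconsolidation pressure determined from the consolidation test and \sigma'_{v0} is the in-situ vertical effective stress; OCR < 1 marks underconsolidation and OCR = 1 normal consolidation.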
Abstract:
Thematic maps obtained by classifying remote-sensing images must meet exacting accuracy requirements. Images acquired from aircraft offer very good spatial resolution, but classical thematic-classification methods applied to them do not always give better results than those applied to satellite data. To improve the classification results, this work jointly uses first-return data from a LIDAR (Light Detection And Ranging) sensor registered simultaneously with airborne spectral-sensor data. The thematic classification of the study scene was obtained, quantified, and discussed with and without LIDAR data after applying several methods: Maximum Likelihood classification, Support Vector Machines with four different kernel functions, and the Isodata clustering algorithm (ML, SVM-L, SVM-P, SVM-RBF, SVM-S, Isodata). The best results were obtained with the SVM using a sigmoid kernel. The results also allow correlation with other physical parameters of great interest, such as the Manning roughness coefficient, for incorporation into a GIS and application in hydraulic modeling.
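As an illustration of the data-fusion step described above, here is a minimal sketch (synthetic arrays, not the study's data) that stacks spectral bands with first-return LIDAR heights and trains an SVM with the sigmoid kernel reported as the best performer:

```python
# A minimal sketch of joint spectral + LIDAR classification with an
# SVM (sigmoid kernel). All arrays are synthetic stand-ins for the
# co-registered per-pixel data described in the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pixels, n_bands = 1000, 4
spectral = rng.random((n_pixels, n_bands))    # airborne spectral bands
lidar = rng.random((n_pixels, 1))             # first-return LIDAR heights
labels = rng.integers(0, 3, n_pixels)         # thematic classes (dummy)

# Stack the LIDAR channel alongside the spectral bands and scale:
# kernel methods are sensitive to feature ranges.
X = StandardScaler().fit_transform(np.hstack([spectral, lidar]))

clf = SVC(kernel="sigmoid").fit(X[:800], labels[:800])   # SVM-S
print("held-out accuracy:", clf.score(X[800:], labels[800:]))
```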
Abstract:
At present there is much literature on the advantages and disadvantages of different methods of statistical and dynamical downscaling of climate variables projected by climate models. Less attention has been paid to indirect variables, like runoff, which play a significant role in evaluating the impact of climate change on hydrological systems. Runoff presents a much greater bias in climate models than climate variables like temperature or precipitation, so it is very important to identify the methods that minimize bias when downscaling runoff from the gridded results of climate models to the basin scale.
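One family of methods commonly used to reduce such bias is quantile mapping; the sketch below shows a simplified empirical version with synthetic data, offered only as an illustration and not as one of the methods evaluated in the text:

```python
# Simplified empirical quantile mapping: correct modeled runoff so its
# distribution matches observations. Synthetic data; illustration only.
import numpy as np

def quantile_map(modeled, observed, values):
    """Map values through the modeled CDF, then back through the
    observed quantile function."""
    q = np.linspace(0.0, 1.0, 101)
    ranks = np.interp(values, np.quantile(modeled, q), q)  # value -> CDF
    return np.interp(ranks, q, np.quantile(observed, q))   # CDF -> value

rng = np.random.default_rng(1)
observed = rng.gamma(2.0, 5.0, 1000)   # "observed" basin runoff
modeled = rng.gamma(2.0, 7.0, 1000)    # biased climate-model runoff
corrected = quantile_map(modeled, observed, modeled)
print(f"means: obs={observed.mean():.1f}, model={modeled.mean():.1f}, "
      f"corrected={corrected.mean():.1f}")
```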
Abstract:
Two different methods of analysis of plate bending, the FEM and the BM, are discussed in this paper. Plate behaviour is assumed to be represented by linear thin-plate theory, where the Poisson-Kirchhoff assumption holds. The BM, based on a weighted mean-square-error technique, produced good results for the plate-bending problem. The computational effort demanded by the BM is smaller than that needed in an FEM analysis for the same level of accuracy. The general applicability of the FEM cannot be matched by the BM: in particular, different types of geometry (plates of arbitrary shape) need similar but not identical treatment in the BM. However, this loss of generality is counterbalanced by the computational efficiency the BM gains in obtaining the solution.
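For context, both methods approximate the same governing model: under the Poisson-Kirchhoff assumption, the transverse deflection w of a thin plate of flexural rigidity D under a distributed load q satisfies the standard biharmonic equation

```latex
D\,\nabla^4 w = q, \qquad D = \frac{E t^3}{12\,(1-\nu^2)}
```

where E is Young's modulus, t the plate thickness, and \nu Poisson's ratio.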
Abstract:
The exhaustion, absolute absence, or simple uncertainty about the amount of fossil-fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that additionally comprehends strong public concerns about pollution and greenhouse-gas emissions.
Due to its excellent environmental impact, the public acceptance of this new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is strongly limited. In the introduction, a general description of the explosion process is given. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out the strengths, deficiencies, and suitability of each. This review concludes that the only viable methodology for combustion modeling, given the existing limitations, is the use of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable: a model of the turbulent flame speed type. It also concludes that, depending on the resolution restrictions and geometry of each particular problem, the use of different turbulence-simulation methodologies, LES or RANS, is the most adequate solution. Based on these findings, a combustion model is created within the turbulent flame speed framework that is able to overcome the deficiencies of the available models in problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure, and steam dilution on the laminar burning velocity; the formulation obtained is valid over a larger domain of temperature, pressure, and steam dilution than any previously available formulation. On the other hand, a number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they were compared against experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance proves to be important for lean mixtures in which the turbulence intensity remains moderate; these conditions matter because they are typical of accidents at nuclear power plants. Therefore, a model is created to account for the effect of the instabilities, and concretely the acoustic-parametric instability, on the flame propagation speed. This includes the mathematical derivation of the heuristic formulation of Bauwebs et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability.
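For context, turbulent flame speed closures of the kind described above solve a balance equation for the Favre-averaged progress variable, with a source term proportional to the turbulent burning velocity S_T. A generic Zimont-type form (not necessarily the thesis's exact formulation) is

```latex
\frac{\partial(\bar{\rho}\,\tilde{c})}{\partial t}
+ \nabla\cdot(\bar{\rho}\,\tilde{\mathbf{u}}\,\tilde{c})
= \nabla\cdot\!\left(\frac{\mu_t}{\mathrm{Sc}_t}\,\nabla\tilde{c}\right)
+ \rho_u\,S_T\,\lvert\nabla\tilde{c}\rvert
```

where \bar{\rho} is the mean density, \rho_u the unburnt-mixture density, \mu_t the turbulent viscosity, and \mathrm{Sc}_t the turbulent Schmidt number; the entire modeling burden falls on the expression chosen for S_T.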
The following task in this research was to apply the developed model to several problems significant for industrial safety and then to analyze the results and compare them with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the model is validated, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. The research determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause such tremendous damage, which is an indication of the importance of this type of investigation. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with particular impact on the transport sector and on nuclear safety, in both fission and fusion technologies.
Abstract:
The RuvC protein of Escherichia coli catalyzes the resolution of recombination intermediates during genetic recombination and the recombinational repair of damaged DNA. Resolution involves specific recognition of the Holliday structure to form a complex that exhibits twofold symmetry with the DNA in an open configuration. Cleavage occurs when strands of like polarity are nicked at the sequence 5′-WTT↓S-3′ (where W is A or T and S is G or C). To determine whether the cleavage site needs to be located at, or close to, the point at which DNA strands exchange partners, Holliday structures were constructed with the junction points at defined sites within this sequence. We found that the efficiency of resolution was optimal when the cleavage site was coincident with the position of DNA strand exchange. In these studies, junction targeting was achieved by incorporating uncharged methyl phosphonates into the DNA backbone, providing further evidence for the importance of charge-charge repulsions in determining DNA structure.
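The consensus uses IUPAC degenerate-base codes, so in practice it matches (A|T)TT followed by (G|C), with the nick after the second T. A quick illustrative scan (the strand sequence is invented):

```python
# Scan a strand for the RuvC consensus 5'-WTT^S-3' (W = A/T, S = G/C;
# "^" marks the nick after the second T). The sequence is invented.
import re

consensus = re.compile(r"(?=([AT]TT[GC]))")   # lookahead: allow overlaps

strand = "GGCATTGTTCAGATTCC"
for m in consensus.finditer(strand):
    site = m.group(1)
    print(f"nick at position {m.start() + 3}: {site[:3]}^{site[3]}")
```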
Abstract:
Despite considerable evidence that ethanol can enhance chloride flux through the gamma-aminobutyric acid type A (GABA(A)) receptor-channel complex in several central neuron types, the effect of ethanol on hippocampal GABAergic systems is still controversial. Therefore, we have reevaluated this interaction in hippocampal pyramidal neurons subjected to local monosynaptic activation combined with pharmacological isolation of the various components of excitatory and inhibitory synaptic potentials, using intracellular current- and voltage-clamp recording methods in the hippocampal slice. In accord with our previous findings, we found that ethanol had little effect on compound inhibitory postsynaptic potentials/currents (IPSP/Cs) containing both GABA(A) and GABA(B) components. However, after selective pharmacological blockade of the GABA(B) component of the IPSP (GABA(B)-IPSP/C) by CGP-35348, low concentrations of ethanol (22-66 mM) markedly enhanced the peak amplitude, and especially the area, of the GABA(A) component (GABA(A)-IPSP/C) in most CA1 pyramidal neurons. Ethanol had no significant effect on the peak amplitude or area of the pharmacologically isolated GABA(B) inhibitory postsynaptic current (IPSC). These results provide new data showing that activation of GABA(B) receptors can obscure ethanol enhancement of GABA(A) receptor function in hippocampus and suggest that similar methods of pharmacological isolation might be applied to other brain regions showing negative or mixed ethanol-GABA interactions.
Abstract:
Context. Nearby late-type stars are excellent targets when seeking young objects in stellar associations and moving groups. The origin of these structures is still poorly understood, and lists of moving-group members often change with time and from author to author. Most members of these groups have been identified by kinematic criteria alone, leading to significant contamination of previous lists by old field stars. Aims. We attempt to identify unambiguous moving-group members among a sample of nearby late-type stars by studying their kinematics, lithium abundance, chromospheric activity, and other age-related properties. Methods. High-resolution echelle spectra (R ~ 57 000) of a sample of nearby late-type stars are used to derive accurate radial velocities, which are combined with precise Hipparcos parallaxes and proper motions to compute Galactic spatial-velocity components. Stars are classified as possible members of the classical moving groups according to their kinematics. The spectra are also used to study several age-related properties of young late-type stars, e.g., the equivalent width of the lithium Li I 6707.8 Å line or the R'_HK index. Additional information, like X-ray fluxes from the ROSAT All-Sky Survey or the presence of debris discs, is also taken into account. The different age estimators are compared, and the moving-group membership of the kinematically selected candidates is discussed. Results. From a total list of 405 nearby stars, 102 (only ~25.2% of the sample) are classified as moving-group candidates according to their kinematics. The number shrinks when age estimates are considered: only 26 candidates (25.5% of the 102) have ages compatible with membership in a moving group.
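The Methods step of combining radial velocities with parallaxes and proper motions into Galactic space-velocity components can be sketched with astropy (all stellar values below are invented; U, V, W sign conventions differ between authors):

```python
# Sketch: Galactic space-velocity components from a radial velocity,
# a Hipparcos-style parallax, and proper motions. Values are made up.
import astropy.units as u
from astropy.coordinates import Galactic, SkyCoord

star = SkyCoord(ra=201.3 * u.deg, dec=-11.2 * u.deg,
                distance=(25.0 * u.mas).to(u.pc, u.parallax()),
                pm_ra_cosdec=-120.0 * u.mas / u.yr,
                pm_dec=35.0 * u.mas / u.yr,
                radial_velocity=12.4 * u.km / u.s)

# Cartesian velocity in the Galactic frame: d_x, d_y, d_z ~ U, V, W.
vel = star.transform_to(Galactic()).velocity
print(vel.d_x, vel.d_y, vel.d_z)
```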
Abstract:
Subpixel methods increase the accuracy and efficiency of image detectors, processing units, and algorithms, and provide very cost-effective systems for object tracking. Published methods achieve resolution increases of up to three orders of magnitude. In this Letter, we demonstrate that this limit can theoretically be improved by several further orders of magnitude, permitting micropixel and submicropixel accuracies. The necessary condition for movement detection is that a single pixel changes its status. We show that an appropriate target design increases the probability of a pixel change for arbitrarily small shifts, thus increasing the detection accuracy of a tracking system. The proposal imposes no severe restrictions on the target or the sensor, thus allowing easy experimental implementation.
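For comparison, a conventional subpixel baseline, upsampled phase correlation as implemented in scikit-image, already reaches fractional-pixel accuracy; the Letter's target-design approach claims orders of magnitude beyond this. A minimal sketch of that baseline on synthetic data:

```python
# Baseline: conventional subpixel registration by upsampled phase
# correlation recovers a known fractional shift of a synthetic image.
import numpy as np
from scipy.ndimage import gaussian_filter, shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(2)
reference = gaussian_filter(rng.random((64, 64)), sigma=2)
moved = shift(reference, (0.37, -0.81))        # known subpixel shift

detected, _, _ = phase_cross_correlation(reference, moved,
                                          upsample_factor=100)
print(detected)   # ~[-0.37, 0.81]: the shift that re-registers `moved`
```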
Abstract:
Purpose: To evaluate postoperative spectral-domain optical coherence tomography findings after macular hole surgery. Methods: Retrospective, interventional, nonrandomized study. Overall, 164 eyes of 157 patients diagnosed with macular hole were operated on by vitrectomy and internal limiting membrane peeling. Preoperative and postoperative best-corrected visual acuity and spectral-domain optical coherence tomography images were obtained. Two groups were considered on the basis of the postoperative integrity of the back-reflection line from the ellipsoid portion of the photoreceptor inner segment: group A (disruption of the ellipsoid portion of the inner segment line, 60 eyes) and group B (restoration of the ellipsoid portion of the inner segment line, 104 eyes). Results: Logarithm of the minimum angle of resolution best-corrected visual acuity improved significantly after macular hole surgery, from a mean preoperative value of 0.79 ± 0.37 (range, 0.15–2.00) to a mean postoperative value of 0.35 ± 0.31 (range, 0.00–1.30) at the last follow-up visit (P < 0.01). Best-corrected visual acuity improved significantly in both groups (all P < 0.01), with a larger improvement in group B than in group A (P < 0.01). Conclusion: Restoration of the ellipsoid portion of the inner segment line seems to be a good prognostic factor for visual rehabilitation after macular hole surgery.
Abstract:
BACKGROUND Contrast-enhanced (ce) fluid-attenuated inversion recovery magnetic resonance imaging (FLAIR MRI) has recently been shown to identify leptomeningeal pathology in multiple sclerosis. OBJECTIVE To demonstrate leptomeningeal enhancement on three-dimensional (3D) FLAIR in a case of Susac's syndrome. METHODS Leptomeningeal enhancement was correlated with clinical activity over 20 months and compared to retinal fluorescein angiography. RESULTS The size, number, and location of leptomeningeal enhancement varied over time and generally correlated with symptom severity. The appearance was remarkably similar to that of retinal vasculopathy. CONCLUSION Ce 3D FLAIR may aid in diagnosis and understanding of pathophysiology in Susac's syndrome and may serve as a biomarker for disease activity.
Abstract:
Application of quantitative micropaleontologic methods of sea-surface temperature (SST) estimation to the planktic foraminiferal faunas at Hole 548 in the northeast Atlantic between 3.5 and 2.0 Ma provides a detailed record of Pliocene surface temperature changes during the onset of major Northern Hemisphere glaciation. SST estimates indicate a relatively warm interval of "enhanced interglacials (warm peaks)" between 3.2 and 2.8 Ma, followed by steady cooling with a significant step-like change toward colder temperatures occurring about 2.4 Ma. This record compares well with faunal and SST records from other North Atlantic sites.
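One common quantitative technique of this kind (offered here as an illustrative example, not necessarily the one used at Hole 548) is the modern analog technique: a fossil assemblage is matched to modern core-top assemblages by a dissimilarity metric, and SST is averaged over the closest analogs. A minimal sketch with synthetic assemblages:

```python
# Illustrative modern analog technique: match a fossil assemblage to
# modern core-top assemblages by squared chord distance and average
# the SSTs of the k closest analogs. All data are synthetic.
import numpy as np

def mat_sst(fossil, modern, modern_sst, k=5):
    d = ((np.sqrt(fossil) - np.sqrt(modern)) ** 2).sum(axis=1)
    return modern_sst[np.argsort(d)[:k]].mean()

rng = np.random.default_rng(3)
modern = rng.dirichlet(np.ones(20), size=200)  # relative abundances
modern_sst = rng.uniform(2.0, 28.0, 200)       # core-top SSTs (deg C)
fossil = rng.dirichlet(np.ones(20))            # one downcore sample
print(f"estimated SST: {mat_sst(fossil, modern, modern_sst):.1f} degC")
```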
Abstract:
Pt. II has title: River gardens; being an account of the best methods of cultivating fresh water plants in aquaria.