337 results for ancillary


Relevance:

10.00%

Publisher:

Abstract:

The main aim of this study was to examine the association between Clostridium difficile infection (CDI) and HIV. A secondary goal was to characterize the trend in CDI-related deaths in Texas from 1999 to 2011. To evaluate CDI and HIV coinfection, we analyzed two datasets provided by CHS-TDSHS covering the 13-year study period 1999-2011: 1) Texas death certificate data and 2) Texas hospital discharge data. An ancillary data source was national-level death data from the CDC. We performed a secondary data analysis and report age-adjusted death rates (mortality) and hospital discharge frequencies (morbidity) for CDI, for HIV and for CDI+HIV coinfection. Since the turn of the century, CDI has reemerged as an important public health challenge owing to the emergence of hypervirulent epidemic strains. From 1999 to 2011 there was a significant upward trend in CDI-related death rates; in the state of Texas alone, the CDI mortality rate increased 8.7-fold over this period, at a rate of 0.2 deaths per year per 100,000 individuals. By contrast, mortality due to HIV decreased by 46% and has been trending downward. The demographic groups in Texas with the highest CDI mortality rates were the elderly (65+), males, whites and hospital inpatients. The epidemiology of C. difficile has changed: the infection is no longer confined to these traditional high-risk groups but is increasingly reported in low-risk populations such as healthy people in the community (community-acquired C. difficile) and, most recently, immunocompromised patients. Among the latter, HIV can worsen the adverse health outcomes of CDI and vice versa. In patients with CDI and HIV coinfection, higher mortality and morbidity were found in young and middle-aged adults, blacks and males, the same demographic groups that are at higher risk for HIV. As with typical CDI, coinfection was concentrated among hospital inpatients.
Of all CDI-related deaths in the USA from 1999 to 2010 in the 25-44 year age group, 13% had HIV infection. Of all CDI-related inpatient hospital discharges in Texas from 1999 to 2011 among patients 44 years and younger, 17% had concomitant HIV infection. HIV is therefore a possible novel emerging risk factor for CDI.
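The reported trend figures imply a baseline rate that the abstract does not state explicitly. The short back-calculation below is illustrative only; the slope and fold increase come from the abstract, while the 1999 and 2011 rates are derived, not reported values.

```python
# Illustrative back-calculation from the abstract's trend figures
# (derived numbers, not values reported in the study):
years = 2011 - 1999          # 12 one-year intervals
slope = 0.2                  # deaths per 100,000 per year (from the abstract)
fold = 8.7                   # overall fold increase (from the abstract)

total_increase = slope * years                # 2.4 deaths per 100,000
baseline_1999 = total_increase / (fold - 1)   # rate implied for 1999
rate_2011 = baseline_1999 * fold              # rate implied for 2011

print(round(baseline_1999, 2), round(rate_2011, 2))  # 0.31 2.71
```

A linear rise of 2.4 deaths per 100,000 is consistent with an 8.7-fold increase only if the 1999 baseline was roughly 0.3 deaths per 100,000.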

This investigation compares two methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as its sources of utilization. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug-prescription services.

The 1995 hospital and physician cost of epilepsy is estimated at $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost.

Utilization. The utilization difference of $136 million comprises an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM (-$79 million) and the exclusion of admissions attributable to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM; the latter suggests NHDS errors in attributing an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization results from different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit-frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions. The remaining ambulatory resource-utilization variance is $7 million. Of this amount, $22 million results from an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight.

Unit cost. The resource-cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million comprises $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance of $178 million comprises higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million; both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation.

Conclusion. Both methods have specific limitations. The PBSM's strength is its sample designs, which lead to nationally representative estimates and permit statistical point and confidence-interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important information required to precisely estimate the cost of an illness is absent. The PBMC&BM is superior in identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, it does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited by the small number of patients followed.
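The decomposition above nests several subtotals, so it is worth verifying that the reported figures are internally consistent. The sketch below simply re-adds the numbers stated in the abstract (all in $ millions):

```python
# Consistency check of the reported cost decomposition ($ millions).
pbsm_total, pbmcbm_total = 722, 1058

# Utilization variance: inpatient (hospital + physician) plus ambulatory.
hospital = -79 + 179          # febrile-seizure inclusion vs. missed admissions
inpatient = hospital + 29     # plus physician visit-rate difference
utilization = inpatient + 7   # plus ambulatory resource variance

# Unit-cost variance: inpatient (per-day + per-visit) plus ambulatory.
inpatient_cost = 19 + 3
ambulatory_cost = 97 + 81
unit_cost = inpatient_cost + ambulatory_cost

assert hospital == 100 and inpatient == 129 and utilization == 136
assert unit_cost == 200
assert utilization + unit_cost == pbmcbm_total - pbsm_total  # 336
print(utilization, unit_cost, utilization + unit_cost)  # 136 200 336
```

Every subtotal reconciles with the $336 million gap between the two estimates.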

These data were collected during a cruise across Drake Passage in the Southern Ocean in February 2009. The dataset comprises coccolithophore abundance, calcification and primary production rates, carbonate chemistry parameters, and ancillary data: macronutrients, chlorophyll-a, average mixed-layer irradiance, daily irradiance above the sea surface, euphotic and mixed-layer depth, temperature and salinity.

The Wadden Sea is located in the southeastern part of the North Sea, forming an extended intertidal area along the Dutch, German and Danish coasts. It is a highly dynamic and largely natural ecosystem influenced by climatic changes and anthropogenic use of the North Sea. Changes in the environment of the Wadden Sea, whether of natural or anthropogenic origin, cannot be monitored by standard measurement methods alone, because large-area surveys of the intertidal flats are often hampered by tides, tidal channels and unstable ground. Remote sensing therefore offers effective monitoring tools. In this study a multi-sensor concept for the classification of intertidal areas in the Wadden Sea has been developed. The method is based on a combined analysis of RapidEye (RE) and TerraSAR-X (TSX) satellite data, coupled with ancillary vector data on the distribution of vegetation, mussel beds and sediments. The classification of vegetation and mussel beds is based on a decision tree and a set of hierarchically structured algorithms that use object and texture features. The sediments are classified by an algorithm that uses thresholds and a majority filter. Further improvements focus on radiometric enhancement and atmospheric correction. First results show that vegetation and mussel beds can be identified using multi-sensor remote sensing. Classifying the sediments in the tidal flats remains a challenge compared with vegetation and mussel beds: the results demonstrate that the sediments cannot be classified with high accuracy by their spectral properties alone, because of their similarity, which is caused predominantly by their water content.
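The majority filter mentioned for the sediment classification is a standard post-classification smoothing step. The sketch below is a generic stand-in, not the study's implementation: each cell of a classified map is replaced by the most frequent class in its neighbourhood, removing isolated misclassified pixels.

```python
from collections import Counter

def majority_filter(classmap, size=3):
    """Replace each cell of a 2-D class map with the majority class in its
    size x size neighbourhood (generic smoothing step; a hypothetical
    stand-in for the algorithm used in the study)."""
    h, w = len(classmap), len(classmap[0])
    r = size // 2
    out = [row[:] for row in classmap]
    for i in range(h):
        for j in range(w):
            window = [classmap[y][x]
                      for y in range(max(0, i - r), min(h, i + r + 1))
                      for x in range(max(0, j - r), min(w, j + r + 1))]
            out[i][j] = Counter(window).most_common(1)[0][0]
    return out

# A lone 'mud' pixel inside a 'sand' flat is smoothed away:
flat = [["sand"] * 3, ["sand", "mud", "sand"], ["sand"] * 3]
print(majority_filter(flat)[1][1])  # sand
```

In practice such a filter would be applied to the thresholded spectral classification before accuracy assessment.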

Surface elevation maps of the southern half of the Greenland subcontinent are produced from radar altimeter data acquired by the Seasat satellite. A summary of the processing procedure and examples of return-waveform data are given. The elevation data are used to generate a regular grid, which is then computer-contoured to produce an elevation contour map. Ancillary maps show the statistical quality of the elevation data and various characteristics of the surface. The elevation map is used to define ice-flow directions and to delineate the major drainage basins. Maps of the Jakobshavns Glacier drainage basin and of the ice divide in the vicinity of Crete Station are also presented. Altimeter-derived elevations are compared with elevations measured both by satellite geoceivers and by optical surveying.
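The gridding step described above can be illustrated with a minimal cell-averaging scheme. This is a simplified sketch only: the real processing also edits bad waveforms and corrects slope-induced errors, and the coordinates and elevations below are hypothetical.

```python
def grid_average(points, lon_edges, lat_edges):
    """Average scattered (lon, lat, elev) altimeter points into a regular
    grid, cell by cell (simplified stand-in for the gridding step)."""
    nx, ny = len(lon_edges) - 1, len(lat_edges) - 1
    sums = [[0.0] * nx for _ in range(ny)]
    counts = [[0] * nx for _ in range(ny)]

    def bin_index(v, edges):
        for k in range(len(edges) - 1):
            if edges[k] <= v < edges[k + 1]:
                return k
        return None  # outside the grid: drop the point

    for lon, lat, elev in points:
        i, j = bin_index(lat, lat_edges), bin_index(lon, lon_edges)
        if i is not None and j is not None:
            sums[i][j] += elev
            counts[i][j] += 1
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(nx)] for i in range(ny)]

pts = [(-45.2, 65.1, 2000.0), (-45.1, 65.2, 2100.0), (-44.6, 65.7, 2400.0)]
print(grid_average(pts, [-46, -45, -44], [65, 66]))  # [[2050.0, 2400.0]]
```

The resulting grid is what a contouring routine would then consume.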

Microzooplankton (the 20 to 200 µm size class of zooplankton) is recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, heterotrophic dinoflagellates are one of the most important groups of organisms in the microzooplankton. However, their grazing and growth rates, feeding behaviour and prey preferences are poorly known and understood. A dataset was assembled to derive a better understanding of heterotrophic dinoflagellate rates in response to parameters such as prey concentration, prey type (size and species), temperature and the dinoflagellates' own size. With these objectives, the literature was searched for laboratory experiments in which the effect of one or more of these parameters was studied. The criteria for selection and inclusion in the database were: (i) a controlled laboratory experiment with a known dinoflagellate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions and carbon content). Rates and ancillary information were originally reported in whatever units suited each experimenter, so the units had to be harmonized after collection. In addition, different units relate to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C dinoflagellate-1 h-1, µm3 dinoflagellate-1 h-1 and prey cells dinoflagellate-1 h-1; clearance rates were calculated where not given, and growth rate is expressed per day.
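The unit harmonization described above amounts to scaling a per-cell ingestion rate by the prey's carbon content and biovolume. The sketch below shows the conversion; the prey values are hypothetical, chosen only for illustration.

```python
# Hedged sketch of the unit harmonization: a grazing rate measured as prey
# cells per dinoflagellate per hour is converted to carbon and biovolume
# units using the prey's per-cell carbon content and volume (hypothetical).
def harmonize(cells_per_pred_h, prey_pgC_per_cell, prey_um3_per_cell):
    return {
        "prey cells dinoflagellate-1 h-1": cells_per_pred_h,
        "pg C dinoflagellate-1 h-1": cells_per_pred_h * prey_pgC_per_cell,
        "um3 dinoflagellate-1 h-1": cells_per_pred_h * prey_um3_per_cell,
    }

rates = harmonize(2.0, prey_pgC_per_cell=20.0, prey_um3_per_cell=150.0)
print(rates["pg C dinoflagellate-1 h-1"],
      rates["um3 dinoflagellate-1 h-1"])  # 40.0 300.0
```

This is why per-cell carbon content and cell volume were required for inclusion in the database.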

Records of the past neodymium (Nd) isotope composition of the deep ocean can resolve ambiguities in the interpretation of other tracers. We present the first Nd isotope data for sedimentary benthic foraminifera. Comparison of the εNd of core-top foraminifera from a depth transect on the Cape Basin side of the Walvis Ridge with published seawater data, and with the modern dissolved SiO2-εNd trend of the deep Atlantic, suggests that benthic foraminifera are a reliable archive of the deep-water Nd isotope composition. Nd isotope values of benthic foraminifera from ODP Site 1264A (Angola Basin side of the Walvis Ridge) spanning the last 8 Ma agree with Fe-Mn oxide coatings from the same samples and are also broadly consistent with existing fish-teeth data for the deep South Atlantic, yielding confidence in the preservation of the marine Nd isotope signal in all these archives. The marine origin of the Nd in the coatings is confirmed by their marine Sr isotope values. These important results allow application of the technique to down-core samples. The new εNd datasets, along with ancillary Cd/Ca and Nd/Ca ratios from the same foraminiferal samples, are interpreted in the context of debates on the Neogene history of North Atlantic Deep Water (NADW) export to the South Atlantic. In general, the εNd and δ13C records are closely correlated over the past 4.5 Ma. The Nd isotope data suggest strong NADW export from 8 to 5 Ma, consistent with one interpretation of published δ13C gradients. Where the εNd record differs from the nutrient-based records, changes in the preformed δ13C or Cd/Ca of southern-derived deep water might account for the difference. Maximum NADW export for the entire record is indicated by all proxies at 3.5-4 Ma. Chemical conditions from 3 to 1 Ma are markedly different, showing on average the lowest NADW export of the record. Modern-day values again imply NADW export about as strong as at any stage over the past 8 Ma.
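The εNd notation used throughout expresses a measured 143Nd/144Nd ratio as the deviation from the chondritic reference (CHUR) in parts per 10,000. A minimal sketch, using the commonly adopted present-day CHUR ratio of 0.512638 (the example input ratio is hypothetical):

```python
# epsilon-Nd: deviation of a 143Nd/144Nd ratio from CHUR, in parts per 10^4.
CHUR_143_144 = 0.512638  # commonly used present-day CHUR value

def epsilon_nd(ratio_143_144):
    return (ratio_143_144 / CHUR_143_144 - 1) * 1e4

# A hypothetical NADW-like measurement:
print(round(epsilon_nd(0.511970), 1))  # -13.0
```

Strongly negative values of this kind are characteristic of North Atlantic sourced deep water, which is what makes εNd useful as a water-mass tracer here.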

Parameters in the photosynthesis-irradiance (P-E) relationship of phytoplankton were measured at weekly to bi-weekly intervals for 20 yr at 6 stations on the Rhode River, Maryland (USA). Variability in the light-saturated photosynthetic rate, PBmax, was partitioned into interannual, seasonal, and spatial components. The seasonal component of the variance was greatest, followed by interannual and then spatial. Physiological models of PBmax based on balanced growth or photoacclimation predicted the overall mean and most of the range, but not individual observations, and failed to capture important features of the seasonal and interannual variability. PBmax correlated most strongly with temperature and the concentration of dissolved inorganic carbon (IC), with lesser correlations with chlorophyll a, diffuse attenuation coefficient, and a principal component of the species composition. In statistical models, temperature and IC correlated best with the seasonal pattern, but temperature peaked in late July, out of phase with PBmax, which peaked in September, coincident with the maximum in monthly averaged IC concentration. In contrast with the seasonal pattern, temperature did not contribute to interannual variation, which instead was governed by IC and the additional lesser correlates. Spatial variation was relatively weak and uncorrelated with ancillary measurements. The results demonstrate that both the overall distribution of PBmax and its relationship with environmental correlates may vary from year to year. Coefficients in empirical statistical models became stable after including 7 to 10 yr of data. The main correlates of PBmax are amenable to automated monitoring, so that future estimates of primary production might be made without labor-intensive incubations.
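The P-E relationship whose parameters are discussed above is commonly modelled with the Jassby-Platt hyperbolic-tangent form, PB = PBmax · tanh(αE/PBmax), where α is the initial slope and E the irradiance. The sketch below evaluates that form; the parameter values are hypothetical, chosen only to illustrate the two regimes.

```python
import math

# Jassby-Platt P-E curve: near-linear at low light, saturating at PBmax.
def pe_curve(E, PBmax, alpha):
    return PBmax * math.tanh(alpha * E / PBmax)

PBmax, alpha = 8.0, 0.05    # hypothetical light-saturated rate and slope
low = pe_curve(10, PBmax, alpha)     # low light: ~ alpha * E
high = pe_curve(2000, PBmax, alpha)  # saturating light: ~ PBmax
print(round(low, 2), round(high, 2))  # 0.5 8.0
```

Fitting this curve to each incubation is what yields the PBmax values whose seasonal, interannual and spatial variance the study partitions.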

Microzooplankton (the 20 to 200 µm size class of zooplankton) is recognised as an important part of marine pelagic ecosystems. In terms of biomass and abundance, pelagic ciliates are one of the most important groups of organisms in the microzooplankton. However, their grazing and growth rates, feeding behaviour and prey preferences are poorly known and understood. A dataset was assembled to derive a better understanding of pelagic ciliate rates in response to parameters such as prey concentration, prey type (size and species), temperature and the ciliates' own size. With these objectives, the literature was searched for laboratory experiments in which the effect of one or more of these parameters was studied. The criteria for selection and inclusion in the database were: (i) a controlled laboratory experiment with a known ciliate feeding on a known prey; (ii) the presence of ancillary information about the experimental conditions and the organisms used (cell volume, cell dimensions and carbon content). Rates and ancillary information were originally reported in whatever units suited each experimenter, so the units had to be harmonized after collection. In addition, different units relate to different mechanisms (carbon to the nutritive quality of the prey, volume to size limits). As a result, grazing rates are available as pg C/(ciliate*h), µm3/(ciliate*h) and prey cells/(ciliate*h); clearance rates were calculated where not given, and growth rate is expressed per day.
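Where a study reported only an ingestion rate, the clearance rate mentioned above can be recovered by dividing by the ambient prey concentration. A minimal sketch with hypothetical example numbers:

```python
# Clearance rate = ingestion rate / prey concentration: the volume of water
# swept clear of prey per ciliate per hour. Example values are hypothetical.
def clearance_rate(ingestion_cells_per_ciliate_h, prey_cells_per_ml):
    return ingestion_cells_per_ciliate_h / prey_cells_per_ml  # ml/(ciliate*h)

ir = 50.0     # prey cells/(ciliate*h)
prey = 1.0e4  # prey cells per ml
print(clearance_rate(ir, prey))  # 0.005 ml/(ciliate*h), i.e. 5 µl/(ciliate*h)
```

This assumes prey depletion during the incubation is small; published compilations typically apply depletion-corrected formulas when it is not.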

Fiber-reinforced polymers (FRP) are used to strengthen concrete structures above all because of their excellent mechanical properties, their corrosion resistance and their light weight, which makes them easy and inexpensive to transport and apply: strengthening is carried out very quickly, with few workers and lightweight ancillary equipment, minimizing both interruption of the use of the structure and inconvenience to users. These advantages have aroused considerable interest among research groups worldwide, which are currently developing new application techniques and calculation methods. For flexural strengthening, the research conducted to date has produced a well-defined and generally accepted design procedure; this is not the case for shear strengthening. Although FRP strengthening has been shown to be an effective system for increasing ultimate shear capacity, further experimental and theoretical studies are needed to advance the understanding of the mechanisms involved and to establish an appropriate design procedure that makes the most of the excellent properties of this material.

The models that explain the shear behaviour of reinforced concrete (RC) members are complex and cannot be directly transposed into engineering formulas. The standards currently in force generally establish shear capacity empirically as the sum of the capacities of the concrete and the transverse steel reinforcement. When a member is externally strengthened with FRP, the models are obviously even more complex. The existing guides and recommendations propose calculating the capacity of the member by adding the strength provided by the external FRP to that already provided by the concrete and transverse steel. The suitability of this approach is questionable, however, because it does not account for possible interaction between the reinforcements.

This gives rise to the subject of this work, which studies the shear behaviour of RC members externally strengthened with a composite of unidirectional carbon-fiber fabric and epoxy resin. First, a complete review of the current state of knowledge on the shear strength of RC members with and without external FRP strengthening is presented, paying special attention to the mechanisms studied to date. The exhaustive and up-to-date literature review allowed the study of the most important proposed models, both for describing the concrete-FRP bond phenomenon and for evaluating the FRP contribution to total shear capacity, through two databases: one of pull-out tests and one of RC beams tested in shear. On this basis, the mechanisms acting in the FRP shear contribution in RC members are set out, together with the way the main existing design guides address them. A stress-resistance model for the FRP is then defined, and two models for calculating the effective stresses or strains are proposed: one based on the bond model of Oller (2005) and the other on a multivariate regression over the identified mechanisms. To complement the study of the work found in the literature, an experimental program was carried out that, besides adding records to the meager existing database, sheds light on the points considered poorly resolved. The program comprised 32 tests on 16 beams 4.5 m long (two tests per beam), shear-strengthened with unidirectional CFRP fabric. Finally, these studies have made it possible to propose modifications to the formulations in the codes and guides currently in force.
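The additive design approach that the thesis questions can be written in one line: the guides compute ultimate shear capacity as the sum of the concrete, steel and FRP contributions. The sketch below shows that rule with an explicit placeholder for the interaction term the additive model omits; all numbers are hypothetical.

```python
# Additive shear-capacity rule used by existing guides: V_u = V_c + V_s + V_f.
# 'interaction' is a hypothetical placeholder for the steel-FRP interaction
# the thesis argues is neglected (negative = loss of combined efficiency).
def shear_capacity(V_c, V_s, V_f, interaction=0.0):
    return V_c + V_s + V_f + interaction

additive = shear_capacity(90.0, 60.0, 40.0)          # kN, guideline approach
with_loss = shear_capacity(90.0, 60.0, 40.0, -15.0)  # kN, with interaction
print(additive, with_loss)  # 190.0 175.0
```

If the interaction term is genuinely negative, the purely additive rule overestimates capacity, which is the unconservative scenario the experimental program probes.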

Competitive abstract machines for Prolog are usually large, intricate, and incorporate sophisticated optimizations. This makes them difficult to code, optimize and, especially, maintain and extend. This is partly because efficiency considerations make it necessary to implement them in low-level languages. Writing the abstract machine (and ancillary code) in a higher-level language can help harness this inherent complexity. In this paper we show how the semantics of the basic components of an efficient virtual machine for Prolog can be described using (a variant of) Prolog which retains much of its semantics. These descriptions are then compiled to C and assembled to build a complete bytecode emulator. Thanks to the high level of the language used and its closeness to Prolog, the abstract machine descriptions can be manipulated using standard Prolog compilation and optimization techniques with relative ease. We also show how, by applying program transformations selectively, we obtain abstract machine implementations whose performance can match and even exceed that of highly tuned, hand-crafted emulators.
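The core idea above, writing instruction semantics in a high-level language and driving them from a generic emulator loop, can be illustrated in miniature. In the paper the definitions are written in a Prolog variant and compiled to C; in this toy analogue Python stands in for both, and the instruction set is hypothetical.

```python
# Toy analogue: instruction semantics as small high-level definitions,
# executed by a generic dispatch loop (the "bytecode emulator").
def move(state, src, dst):           # copy one register to another
    state["regs"][dst] = state["regs"][src]

def add(state, a, b, dst):           # sum two registers
    state["regs"][dst] = state["regs"][a] + state["regs"][b]

SEMANTICS = {"move": move, "add": add}

def emulate(bytecode, regs):
    state = {"regs": regs}
    for op, *args in bytecode:
        SEMANTICS[op](state, *args)  # dispatch on the instruction name
    return state["regs"]

print(emulate([("move", 0, 2), ("add", 1, 2, 0)], [3, 4, 0]))  # [7, 4, 3]
```

Because each instruction is an ordinary high-level definition, the whole table can be analyzed and transformed before being lowered to C, which is the manipulation the paper exploits.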

We describe the current status of, and provide preliminary performance results for, a compiler from Prolog to C. The compiler is novel in that it is designed to accept different kinds of high-level information (typically obtained via an analysis of the initial Prolog program and expressed in a standardized language of assertions) and to use this information to optimize the resulting C code, which is then further processed by an off-the-shelf C compiler. The basic translation process essentially mimics the unfolding of a C-coded bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. Optimizations are then applied to this unfolded program. This is facilitated by a more flexible design of the bytecode instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: ancillary pieces of C code, data definitions, memory management routines and areas, etc., as well as mixing bytecode-emulated code with natively compiled code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
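The "unfolding" idea above can be illustrated in miniature: instead of decoding instructions at run time, the emulator is specialized with respect to a fixed bytecode sequence, leaving straight-line code. In this toy sketch the specialized result is a Python closure (in the paper it is C), and the two instructions are hypothetical.

```python
# Toy illustration of unfolding an emulator w.r.t. a fixed bytecode sequence:
# the dispatch happens once, at "compile" time, not per execution.
def compile_bytecode(bytecode):
    steps = []
    for op, *args in bytecode:       # unfold the dispatch loop here
        if op == "inc":
            i, = args
            steps.append(lambda regs, i=i: regs.__setitem__(i, regs[i] + 1))
        elif op == "mul":
            a, b, dst = args
            steps.append(lambda regs, a=a, b=b, d=dst:
                         regs.__setitem__(d, regs[a] * regs[b]))
        else:
            raise ValueError(f"unknown opcode {op!r}")

    def compiled(regs):
        for step in steps:           # no per-instruction decoding remains
            step(regs)
        return regs
    return compiled

f = compile_bytecode([("inc", 0), ("mul", 0, 1, 2)])
print(f([2, 5, 0]))  # [3, 5, 15]
```

The analysis-derived assertions the paper mentions would let further steps be simplified or removed once types and modes are known.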

This doctoral thesis studies and analyzes techniques and models for obtaining biophysical parameters and environmental indicators in an automated way from high-temporal-resolution satellite imagery. First, the main Earth observation programs are reviewed, with particular attention to those providing high temporal resolution. The methodologies and processing chains that yield quantitative parameters and qualitative products related to various aspects of land cover are also reviewed, considering their adaptability to the peculiarities of the data. Second, a model for obtaining environmental parameters is proposed that integrates information from space sensors and ancillary data sources, using to some extent the methodologies presented earlier, optimizing some of them and proposing new ones, so that the parameters can be obtained efficiently and systematically from the available data.
After this review and the proposal of the model, experiments were carried out to check its behaviour in practical cases, to refine the data and process flows, and to establish the situations that can affect the results; from all of this the model is evaluated. The sensors considered in this work are MODIS, of high temporal resolution, and Thematic Mapper (TM), of medium spatial resolution, because they are reference instruments for environmental studies and because the long duration of their data-recording missions permits studies of the temporal evolution of certain biophysical parameters over extended periods; moreover, the continuity of the corresponding programs seems assured. Among the experiments, a methodology for integrating data from the two sensors was tested. A temporal-interpolation method was also analyzed that yields synthetic images with the spatial resolution of TM (30 m) and the temporal resolution of MODIS (1 day), extending the range of application of the latter sensor. In addition, some of the factors that affect the recorded data were analyzed, such as acquisition geometry and precipitation events, which alter the results obtained. The validity of the proposed model was also verified in the study of dynamic environmental phenomena, specifically the organic contamination of reservoir waters. Finally, the model performed well in all the cases tested and proved flexible, allowing it to adapt to new data sources and new calculation methodologies.
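The temporal-interpolation idea, synthetic images combining TM's 30 m spatial resolution with MODIS's daily revisit, can be sketched in a much-simplified per-pixel form: add the coarse sensor's temporal change to a fine-resolution base image. This is illustrative only; operational fusion methods (e.g. STARFM-type approaches) weight spectrally similar neighbouring pixels rather than working pixel by pixel, and the reflectance values below are hypothetical.

```python
# Simplified sketch of synthetic-image generation: fine-resolution image at
# the prediction date = fine base image + coarse-sensor temporal change.
def synthetic_image(fine_t0, coarse_t0, coarse_t1):
    return [[f + (c1 - c0)                      # fine base + coarse change
             for f, c0, c1 in zip(fr, cr0, cr1)]
            for fr, cr0, cr1 in zip(fine_t0, coarse_t0, coarse_t1)]

fine = [[0.30, 0.32], [0.28, 0.31]]       # hypothetical TM-like reflectances
coarse_t0 = [[0.29, 0.31], [0.27, 0.30]]  # MODIS-like, acquisition date
coarse_t1 = [[0.33, 0.35], [0.31, 0.34]]  # MODIS-like, prediction date
print(synthetic_image(fine, coarse_t0, coarse_t1))  # each pixel shifted +0.04
```

In this toy case the coarse change is uniform (+0.04), so every fine pixel shifts by the same amount; real scenes would show spatially varying change.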

This paper discusses the role of hydropower in the Spanish electric power system, which has a high penetration of non-dispatchable power sources with a clear upward trend for the coming years. The development of new hydropower facilities will likely be based on pumped-storage plants. Hydropower is a mature and efficient technology for large-scale energy storage and therefore makes a key contribution to the integration of non-dispatchable renewable sources such as wind or photovoltaics. The benefits obtained from load shifting alone may be insufficient to offset the cost of a new plant; however, revenues can increase substantially through participation in ancillary services, which would require an appropriate design of the electricity market. The contribution of pumped-storage plants to balancing services can be extended to off-peak hours by using either variable-speed pumping or the hydraulic short-circuit configuration. The need to mitigate hydrological effects downstream of hydro plants may introduce operational constraints that could limit to some extent the services described above; however, the environmental effects caused by pumped-storage plants are expected to be significantly smaller.
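The load-shifting economics discussed above reduce to a simple back-of-the-envelope calculation: energy sold at peak prices must cover the energy bought off-peak plus round-trip pumping losses. The sketch below shows that arithmetic; all prices, volumes and the efficiency are hypothetical.

```python
# Daily arbitrage revenue of a pumped-storage plant (hypothetical numbers):
# sell E at the peak price, buy E/eta off-peak to refill the upper reservoir.
def daily_arbitrage(E_mwh, p_peak, p_offpeak, eta=0.75):
    pumped = E_mwh / eta                      # energy bought to store E_mwh
    return E_mwh * p_peak - pumped * p_offpeak

rev = daily_arbitrage(1000, p_peak=80.0, p_offpeak=30.0)
print(round(rev))  # 40000 (currency units/day): 80k sold minus 40k bought
```

With a 75% round-trip efficiency, arbitrage is profitable only while the peak/off-peak price ratio exceeds 1/eta, which is why the paper argues that ancillary-service revenues are needed to justify new plants.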