907 results for Link variables method
Abstract:
The Houston region is home to arguably the largest petrochemical and refining complex anywhere. The effluent of this complex includes many potentially hazardous compounds, and study of some of them has led to the recognition that a number of known and probable carcinogens are present at elevated levels in ambient air. Two of these, benzene and 1,3-butadiene, have been found at concentrations that may pose a health risk for residents of Houston. Recent popular journalism and publications by local research institutions have increased public interest in Houston's air quality. Much of this literature has been critical of local regulatory agencies' oversight of industrial pollution, and a number of citizens in the region have begun to volunteer with air quality advocacy groups to test community air. Inexpensive methods exist for monitoring ambient concentrations of ozone, particulate matter and airborne toxics. This study evaluates a technique that has been successfully applied to airborne toxics: solid phase microextraction (SPME), which has been used to measure airborne volatile organic hydrocarbons at community-level concentrations. It has yielded accurate and rapid concentration estimates at a relatively low cost per sample. Examples of its application to the measurement of airborne benzene exist in the literature; none have been found for airborne 1,3-butadiene. These compounds were selected to evaluate SPME as a community-deployed technique, to replicate previous applications to benzene, to extend the technique to 1,3-butadiene, and because of the salience of both compounds in this community. This study demonstrates that SPME is a useful technique for quantification of 1,3-butadiene at concentrations observed in Houston; laboratory background levels precluded recommending the technique for benzene.
One type of SPME fiber, 85 μm Carboxen/PDMS, was found to be a sensitive sampling device for 1,3-butadiene under temperature and humidity conditions common in Houston. This study indicates that these variables affect instrument response, which suggests that calibration must be performed under the specific conditions in which samples are collected. While deployment of this technique was less expensive than other methods of quantifying 1,3-butadiene, the complexity of calibration may exclude an SPME method from broad deployment by community groups.
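The calibration dependence described above can be illustrated with a small sketch. This is hypothetical, not this study's procedure: the linear response form, the variable names and all data are invented. It fits instrument response to concentration, temperature and humidity by ordinary least squares, so that condition-specific effects are absorbed into the calibration coefficients.

```python
# Hypothetical SPME calibration sketch (invented model and data):
# response = b0 + b1*conc + b2*temp + b3*rh, fitted by ordinary least
# squares via the normal equations, solved with Gaussian elimination.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_calibration(rows):
    """rows: list of (conc, temp, rh, response); returns [b0, b1, b2, b3]."""
    X = [[1.0, c, t, h] for c, t, h, _ in rows]
    y = [resp for *_, resp in rows]
    k = 4
    XtX = [[sum(X[i][a] * X[i][j] for i in range(len(X))) for j in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(X))) for a in range(k)]
    return solve(XtX, Xty)
```

A separate fit per temperature/humidity regime, rather than one pooled surface, would be the stricter reading of the abstract's conclusion.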
Abstract:
This research examined the extent to which Health Belief Model (HBM) and socioeconomic variables are useful in explaining whether married fecund women who intend no additional births use more effective contraceptive methods. The data source was the 1976 National Survey of Family Growth, conducted under the auspices of the National Center for Health Statistics. Using the HBM as a framework for multivariate analyses, limited support was found (with the available measures) for the proposition that the HBM components of motivation and perceived efficacy influence the likelihood of more effective contraceptive method use. Support was also found that modifying variables suggested by the HBM can influence the effects of HBM components on the likelihood of more effective method use. Socioeconomic variables were found, using all cases and some subgroups, to have a significant additional influence on the likelihood of using more effective methods. Limited support was found for the concept that the greater the opportunity costs of an unwanted birth, the greater the likelihood of using more effective contraceptive methods. This research supports the use of HBM and socioeconomic variables to explain the likelihood of a protective health behavior: the use of more effective contraception when no additional births are intended.
Abstract:
This investigation compares two different methodologies for calculating the national cost of epilepsy: the provider-based survey method (PBSM) and the patient-based medical charts and billing method (PBMC&BM). The PBSM uses the National Hospital Discharge Survey (NHDS), the National Hospital Ambulatory Medical Care Survey (NHAMCS) and the National Ambulatory Medical Care Survey (NAMCS) as its sources of utilization data. The PBMC&BM uses patient data, charts and billings, to determine utilization rates for specific components of hospital, physician and drug prescription costs. The 1995 hospital and physician cost of epilepsy is estimated to be $722 million using the PBSM and $1,058 million using the PBMC&BM. The difference of $336 million results from a $136 million difference in utilization and a $200 million difference in unit cost. Utilization. The utilization difference of $136 million is composed of an inpatient variation of $129 million ($100 million hospital and $29 million physician) and an ambulatory variation of $7 million. The $100 million hospital variance is attributed to the inclusion of febrile seizures in the PBSM ($−79 million) and the exclusion of admissions attributable to epilepsy ($179 million). The former suggests that the diagnostic codes used in the NHDS may not properly match the current definition of epilepsy as used in the PBMC&BM; the latter suggests NHDS errors in attributing an admission to the principal diagnosis. The $29 million variance in inpatient physician utilization is the result of different per-day-of-care physician visit rates: 1.3 for the PBMC&BM versus 1.0 for the PBSM. The absence of visit frequency measures in the NHDS affects the internal validity of the PBSM estimate and requires the investigator to make conservative assumptions. The remaining ambulatory resource utilization variance is $7 million.
Of this amount, $22 million is the result of an underestimate of ancillaries in the NHAMCS and NAMCS extrapolations using the patient visit weight. Unit cost. The resource cost variation is $200 million: $22 million inpatient and $178 million ambulatory. The inpatient variation of $22 million is composed of $19 million in hospital per-day rates, due to a higher cost per day in the PBMC&BM, and $3 million in physician visit rates, due to a higher cost per visit in the PBMC&BM. The ambulatory cost variance of $178 million is composed of higher per-physician-visit costs of $97 million and higher per-ancillary costs of $81 million; both are attributed to the PBMC&BM's precise identification of resource utilization, which permits accurate valuation. Conclusion. Both methods have specific limitations. The PBSM's strengths are its sample designs, which lead to nationally representative estimates and permit statistical point and confidence interval estimation for the nation for certain variables under investigation. However, the findings of this investigation suggest that the internal validity of the derived estimates is questionable and that important additional information required to precisely estimate the cost of an illness is absent. The PBMC&BM is superior at identifying the resources utilized in the physician encounter with the patient, permitting more accurate valuation. However, the PBMC&BM does not have the statistical reliability of the PBSM; it relies on synthesized national prevalence estimates to extrapolate a national cost estimate. While precision is important, the ability to generalize to the nation may be limited because of the small number of patients followed.
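The nested dollar figures reported above can be checked for internal consistency in a few lines; all figures, in millions of dollars, are taken directly from the abstract.

```python
# Consistency check of the abstract's cost decomposition ($ millions).

pbsm_total, pbmcbm_total = 722, 1058
gap = pbmcbm_total - pbsm_total          # total method difference: 336

# Utilization components (hospital inpatient, physician inpatient, ambulatory)
utilization = {"hospital_inpatient": 100,
               "physician_inpatient": 29,
               "ambulatory": 7}          # sums to 136

# Unit-cost components
unit_cost = {"inpatient": 22, "ambulatory": 178}   # sums to 200

# The two top-level components account for the whole gap.
assert gap == sum(utilization.values()) + sum(unit_cost.values())

# The hospital inpatient variance nets two opposing effects:
febrile_seizures, missed_admissions = -79, 179
assert utilization["hospital_inpatient"] == febrile_seizures + missed_admissions

# Finer splits reported for the unit-cost side:
assert unit_cost["inpatient"] == 19 + 3      # per-day rates + visit rates
assert unit_cost["ambulatory"] == 97 + 81    # physician visits + ancillaries
```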
Abstract:
Geostrophic surface velocities can be derived from the gradients of the mean dynamic topography, the difference between the mean sea surface and the geoid. Independently observed mean dynamic topography data are therefore valuable input parameters and constraints for ocean circulation models. For a successful fit to observational dynamic topography data, not only is the mean dynamic topography required on the particular ocean model grid, but also information about its inverse covariance matrix. The calculation of the mean dynamic topography from satellite-based gravity field models and altimetric sea surface height measurements, however, is not straightforward. For this purpose, we previously developed an integrated approach that combines these two different observation groups in a consistent way without using the common filter approaches (Becker et al. in J Geodyn 59(60):99-110, 2012, doi:10.1016/j.jog.2011.07.0069; Becker in Konsistente Kombination von Schwerefeld, Altimetrie und hydrographischen Daten zur Modellierung der dynamischen Ozeantopographie [Consistent combination of gravity field, altimetry and hydrographic data for modelling the dynamic ocean topography], 2012, http://nbn-resolving.de/nbn:de:hbz:5n-29199). Within this combination method, the full spectral range of the observations is considered. Furthermore, it allows the direct determination of the normal equations (i.e., the inverse of the error covariance matrix) of the mean dynamic topography on arbitrary grids, which is one of the requirements for ocean data assimilation. In this paper, we report progress through the selection and improved processing of altimetric data sets. We focus on the preprocessing steps applied to along-track altimetry data from Jason-1 and Envisat to obtain a mean sea surface profile. During this procedure, a rigorous variance propagation is accomplished, so that, for the first time, the full covariance matrix of the mean sea surface is available.
The combination of the mean profile and a combined GRACE/GOCE gravity field model yields a mean dynamic topography model for the North Atlantic Ocean that is characterized by a defined set of assumptions. We show that including the geodetically derived mean dynamic topography with the full error structure in a 3D stationary inverse ocean model improves modeled oceanographic features over previous estimates.
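The opening statement, that geostrophic surface velocities follow from MDT gradients, corresponds to the standard relations u = -(g/f) ∂MDT/∂y and v = (g/f) ∂MDT/∂x, with Coriolis parameter f = 2Ω sin(lat). A minimal sketch of that relation (the gradient values in the usage note are invented for illustration):

```python
import math

# Standard geostrophic balance at the surface:
#   u = -(g/f) * dMDT/dy,  v = (g/f) * dMDT/dx,  f = 2*Omega*sin(lat)

G = 9.81            # gravitational acceleration, m/s^2
OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s

def geostrophic_velocity(dmdt_dx, dmdt_dy, lat_deg):
    """MDT gradients in m per m; returns (u, v) in m/s."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return -G / f * dmdt_dy, G / f * dmdt_dx
```

For example, a northward MDT slope of 1 m per 1000 km at 45°N (dmdt_dy = 1e-6) gives a westward surface flow of roughly 0.1 m/s.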
Abstract:
Euphausiids constitute a major biomass component of shelf ecosystems and play a fundamental role in the rapid vertical transport of carbon from the ocean surface to deeper layers during their daily vertical migration (DVM). DVM depth and migration patterns depend on oceanographic conditions with respect to temperature, light and oxygen availability at depth, factors that are highly seasonal in most marine regions. Changes in these abiotic conditions also shape euphausiid metabolism, including aerobic and anaerobic energy production. Here we introduce a global krill respiration model which includes the effects of latitude (LAT), the day of the year of interest (DoY) and the number of daylight hours on that day (DLh), in addition to the basal variables that determine ectothermic oxygen consumption (temperature, body mass and depth), in an artificial neural network (ANN) model. The newly implemented parameters link space and time, in terms of season and photoperiod, to krill respiration. The ANN model showed a better fit (r² = 0.780) when DLh and LAT were included, indicating a decrease in respiration with increasing LAT and decreasing DLh. We therefore propose DLh as a potential variable to consider when building physiological models for both hemispheres. We also tested the standard respiration rates of the most commonly investigated species for seasonality, across a large range of DLh and DoY, with multiple linear regression (MLR) and generalized additive models (GAM). GAM successfully integrated DLh (r² = 0.563) and DoY (r² = 0.572) effects on the respiration rates of the Antarctic krill, Euphausia superba, yielding the minimum metabolic activity in mid-June and the maximum at the end of December. Neither the MLR nor the GAM approach worked for the North Pacific krill Euphausia pacifica, and MLR for the North Atlantic krill Meganyctiphanes norvegica remained inconclusive because of insufficient seasonal data coverage.
We strongly encourage comparative respiration measurements of key euphausiid species worldwide in different seasons to improve accuracy in ecosystem modelling.
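The DLh predictor used above can be computed from LAT and DoY alone. A hedged sketch (not the authors' code) using the standard solar-declination and sunrise-hour-angle approximation:

```python
import math

# Approximate day length (hours) from latitude and day of year:
#   declination ~ -23.44 deg * cos(2*pi*(DoY + 10)/365)
#   cos(h0) = -tan(lat) * tan(declination);  DLh = 24 * h0 / pi

def daylight_hours(lat_deg, doy):
    """Approximate day length in hours at latitude lat_deg on day doy."""
    decl = math.radians(-23.44) * math.cos(2.0 * math.pi * (doy + 10) / 365.0)
    x = -math.tan(math.radians(lat_deg)) * math.tan(decl)
    x = max(-1.0, min(1.0, x))   # clamp: polar day (x < -1) or night (x > 1)
    return 24.0 * math.acos(x) / math.pi
```

At the equator this gives roughly 12 h year-round, and it saturates at 0 or 24 h poleward of the polar circles in the respective winter and summer, so it covers both hemispheres as the abstract proposes.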
Abstract:
(preliminary) Exchanges of carbon, water and energy between the land surface and the atmosphere are monitored at the ecosystem level by the eddy covariance technique. Currently, the FLUXNET database contains more than 500 registered sites, up to 250 of which share data (Free Fair Use dataset). Many modelling groups use the FLUXNET dataset to evaluate ecosystem model performance, but this requires uninterrupted time series for the meteorological variables used as input. Because the original in-situ data often contain gaps, from very short (a few hours) to relatively long (some months), we developed a new and robust method for filling the gaps in meteorological data measured at site level. Our approach has the benefit of using continuous data available globally (ERA-Interim) at high temporal resolution, spanning from 1989 to today. These data are not measured at site level, however, so a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the LaThuile collection, freely available after registration under a Fair-Use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, the bias correction removes from 10% to 36% of the initial mismatch between in-situ and ERA-Interim data, depending on the meteorological variable considered. In comparison to the internal variability of the in-situ data, the root mean square error (RMSE) between the in-situ data and the bias-corrected ERA-Interim data remains relatively large (on average over all sites, from 27% to 76% of the standard deviation of the in-situ data, depending on the meteorological variable considered). The performance of the method remains low for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to that measured at the FLUXNET stations.
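One simple form of the bias-correction idea (a hypothetical illustration, not necessarily the method developed here) is linear scaling: match the reanalysis mean and standard deviation to the overlapping in-situ record, then fill the gaps from the corrected series. The remark above about conserving the in-situ standard deviation is exactly what the scale factor addresses.

```python
import math

# Hypothetical linear-scaling bias correction and gap filling.
# era and insitu are equal-length, time-aligned series; gaps in the
# in-situ record are represented by None.

def bias_correct(era, insitu):
    """Rescale the full era series to the in-situ mean/std computed
    over the non-gap overlap only."""
    pairs = [(e, s) for e, s in zip(era, insitu) if s is not None]
    me = sum(e for e, _ in pairs) / len(pairs)
    ms = sum(s for _, s in pairs) / len(pairs)
    sde = math.sqrt(sum((e - me) ** 2 for e, _ in pairs) / len(pairs))
    sds = math.sqrt(sum((s - ms) ** 2 for _, s in pairs) / len(pairs))
    scale = sds / sde if sde > 0 else 1.0
    return [ms + scale * (e - me) for e in era]

def fill_gaps(insitu, corrected):
    """Replace gaps (None) in the in-situ series with corrected values."""
    return [c if s is None else s for s, c in zip(insitu, corrected)]
```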
Abstract:
Aim: Greater understanding of the processes underlying biological invasions is required to determine and predict invasion risk. Two subspecies of olive (Olea europaea subsp. europaea and Olea europaea subsp. cuspidata) have been introduced into Australia from the Mediterranean Basin and southern Africa during the 19th century. Our aim was to determine to what extent the native environmental niches of these two olive subspecies explain the current spatial segregation of the subspecies in their non-native range. We also assessed whether niche shifts had occurred in the non-native range, and examined whether invasion was associated with increased or decreased occupancy of niche space in the non-native range relative to the native range. Location: South-eastern Australia, Mediterranean Basin and southern Africa. Methods: Ecological niche models (ENMs) were used to quantify the similarity of native and non-native realized niches. Niche shifts were characterized by the relative contribution of niche expansion, stability and contraction based on the relative occupancy of environmental space by the native and non-native populations. Results: Native ENMs indicated that the spatial segregation of the two subspecies in their non-native range was partly determined by differences in their native niches. However, we found that environmentally suitable niches were less occupied in the non-native range relative to the native range, indicating that niche shifts had occurred through a contraction of the native niches after invasion, for both subspecies. Main conclusions: The mapping of environmental factors associated with niche expansion, stability or contraction allowed us to identify areas of greater invasion risk. This study provides an example of successful invasions that are associated with niche shifts, illustrating that introduced plant species are sometimes readily able to establish in novel environments. 
In these situations, the assumption of niche stasis during invasion that is implicit in ENMs may be unreasonable.
Abstract:
ENVISAT ASAR WSM images with a pixel size of 150 × 150 m, acquired under different meteorological, oceanographic and sea ice conditions, were used to detect icebergs in the Amundsen Sea (Antarctica). An object-based method for automatic iceberg detection from SAR data has been developed and applied. The object identification is based on spectral and spatial parameters at 5 scale levels, and was verified against manual classification in four polygon areas chosen to represent varying environmental conditions. The algorithm works comparatively well under the freezing temperatures and strong wind conditions that prevail in the Amundsen Sea throughout the year. The detection rate was 96%, corresponding to 94% of the iceberg area (counting icebergs larger than 0.03 km²), across all seasons. The presented algorithm tends to generate errors in the form of false alarms, mainly caused by the presence of ice floes, rather than misses. This affects reliability, since false alarms had to be corrected manually after the analysis.
Abstract:
The Weddell Gyre plays a crucial role in the regulation of climate by transferring heat into the deep ocean through deep and bottom water mass formation. However, our understanding of Weddell Gyre water mass properties is limited to regions of data availability, primarily along the Prime Meridian. The aim here is to provide a dataset of the upper water column properties of the entire Weddell Gyre. Objective mapping was applied to Argo float data to produce spatially gridded, time-composite maps of temperature and salinity at fixed pressure levels ranging from 50 to 2000 dbar, as well as temperature, salinity and pressure at the level of the sub-surface temperature maximum. While the data are currently too limited to incorporate time into the gridded structure, they are extensive enough to produce maps of the entire region for three time-composite periods (2002-2005, 2006-2009 and 2010-2013), which can be used to determine how representative, on a gyre scale, conclusions drawn from data collected along the usual RV transect lines are. The time-composite data sets are provided as netCDF files, one for each time period. Mapped fields of conservative temperature, absolute salinity and potential density are provided for 41 vertical pressure levels. These variables, as well as pressure, are also provided at the level of the sub-surface temperature maximum. The corresponding mapping errors are included in the netCDF files. Further details are given in the global attributes, such as the units of the variables and the structure of the corresponding data arrays (i.e., latitude × longitude × vertical pressure level). In addition, all files ending in "_potTpSal" provide mapped fields of potential temperature and practical salinity.
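Objective mapping of scattered profiles onto a regular grid, including the mapping-error fields mentioned above, can be sketched in one dimension. This is a hedged illustration, not the dataset's production code: the Gaussian covariance, length scale and noise level are invented. Observations are weighted through a signal-plus-noise covariance matrix, and the same weights yield a formal mapping error at each grid point.

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def objective_map(obs_x, obs_v, grid_x, L=2.0, noise=0.1):
    """1D optimal interpolation; returns (estimates, mapping errors)."""
    n = len(obs_x)
    mean = sum(obs_v) / n
    anom = [v - mean for v in obs_v]
    cov = lambda a, b: math.exp(-(a - b) ** 2 / (2.0 * L * L))
    # data-data covariance with observation noise on the diagonal
    C = [[cov(obs_x[i], obs_x[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    w = solve(C, anom)                        # weights C^{-1}(v - mean)
    est, err = [], []
    for gx in grid_x:
        g = [cov(gx, x) for x in obs_x]       # grid-data covariances
        est.append(mean + sum(gi * wi for gi, wi in zip(g, w)))
        wg = solve(C, g)
        # normalized mapping error: prior variance minus explained part
        err.append(1.0 - sum(gi * wgi for gi, wgi in zip(g, wg)))
    return est, err
```

Far from any observation the estimate relaxes to the mean and the error approaches the prior variance, which is how the gridded error fields flag data-sparse parts of the gyre.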
Abstract:
Studies on the impact of historical, current and future global change require very high-resolution climate data (≤ 1 km) as a basis for modelled responses, meaning that data from digital climate models generally require substantial rescaling. Another shortcoming of available datasets on past climate is that the effects of sea level rise and fall are not considered. Without such information, the study of glacial refugia or of early Holocene plant and animal migration is incomplete if not impossible. Sea level at the last glacial maximum (LGM) was approximately 125 m lower, creating substantial additional terrestrial area for which no current baseline data exist. Here, we introduce a novel gridded climate dataset for the LGM that is both very high resolution (1 km) and extends to the LGM sea and land mask. We developed two methods to extend current terrestrial precipitation and temperature data to areas between the current and LGM coastlines. The absolute interpolation error is less than 1 °C for 98.9% and less than 0.5 °C for 87.8% of all pixels within the first two 1-arc-degree distance zones. We use the change factor method with these newly assembled baseline data to downscale five global circulation models of LGM climate to a resolution of 1 km for Europe. As additional variables we calculate 19 'bioclimatic' variables, which are often used in climate change impact studies on biological diversity. The new LGM climate maps are well suited for analysing refugia and migration during the Holocene warming that followed the LGM.
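The change factor method named above can be sketched as follows (a hypothetical illustration with invented grids, not the authors' implementation): the coarse GCM anomaly (LGM minus present-day control) is interpolated to the fine grid and added to the high-resolution present-day baseline, so the fine-scale spatial structure of the baseline is preserved.

```python
# Hypothetical change factor (delta) downscaling on toy dict-based grids.

def nearest(field, x, y):
    """Nearest-neighbour interpolation of a coarse field to point (x, y)."""
    kx, ky = min(field, key=lambda k: (k[0] - x) ** 2 + (k[1] - y) ** 2)
    return field[(kx, ky)]

def change_factor(baseline_1km, gcm_lgm, gcm_ctrl, interp=nearest):
    """baseline_1km: {(x, y): temperature} on the fine grid;
    gcm_lgm / gcm_ctrl: coarse-grid fields on identical keys."""
    delta = {k: gcm_lgm[k] - gcm_ctrl[k] for k in gcm_ctrl}
    return {(x, y): t + interp(delta, x, y)
            for (x, y), t in baseline_1km.items()}
```

In practice the anomaly would be interpolated smoothly (e.g. bilinearly) rather than by nearest neighbour, and precipitation is usually handled with a ratio rather than a difference; both refinements are orthogonal to the core idea shown here.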
Abstract:
The geometries of a catchment constitute the basis for distributed, physically based numerical modelling in different geoscientific disciplines. In this paper, results from ground-penetrating radar (GPR) measurements, in the form of a 3D model of total sediment thickness and active layer thickness in a periglacial catchment in western Greenland, are presented. Using the topography, the thickness and distribution of sediments are calculated. Vegetation classification and GPR measurements are used to scale active layer thickness from local measurements to catchment-scale models. Annual maximum active layer thickness varies from 0.3 m in wetlands to 2.0 m in barren areas and areas of exposed bedrock. Maximum sediment thickness is estimated to be 12.3 m in the major valleys of the catchment. A method to correlate surface vegetation with active layer thickness is also presented. By using relatively simple methods, such as probing and vegetation classification, it is possible to upscale local point measurements to catchment-scale models in areas where the upper subsurface is relatively homogeneous. The resulting spatial model of active layer thickness can be used in combination with the sediment model as geometrical input to further studies of subsurface mass transport and hydrological flow paths in the periglacial catchment through numerical modelling.
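The vegetation-based upscaling step can be sketched simply (a hypothetical illustration; the class names and thickness values are invented, within the 0.3-2.0 m range reported above): point measurements are averaged per vegetation class, and each mapped pixel then receives the mean of its class.

```python
# Hypothetical per-class upscaling of active layer thickness.

def class_means(points):
    """points: list of (veg_class, thickness_m); returns per-class means."""
    sums = {}
    for cls, t in points:
        s, n = sums.get(cls, (0.0, 0))
        sums[cls] = (s + t, n + 1)
    return {cls: s / n for cls, (s, n) in sums.items()}

def upscale(veg_map, means):
    """veg_map: {pixel: veg_class}; returns {pixel: thickness_m}."""
    return {px: means[cls] for px, cls in veg_map.items()}
```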
Abstract:
The estimation of carbon dioxide (CO2) fluxes over the open ocean plays an important role in the determination of the global carbon cycle. A frequently used method for this is the eddy covariance technique, which is based on the theory of the Prandtl layer, with height-constant fluxes in the atmospheric boundary layer. To test the constant flux layer assumption, measurements of turbulent heat and CO2 fluxes were started in 2008 within the project Surface Ocean Processes in the Anthropocene (SOPRAN) at the research platform FINO2. The FINO2 platform is situated in the south-west of the Baltic Sea, in the tri-border region between Germany, Denmark and Sweden. Within the SOPRAN research project, the platform was equipped with additional sensors in June 2008. A combination of 3-component sonic anemometers (USA-1) and open-path infrared gas analyzers for absolute humidity (H2O) and CO2 (LICOR 7500) was installed on a 9 m long boom directed southward of the platform at two heights, 6.8 and 13.8 m above the sea surface. In addition, slow-response temperature and humidity sensors were installed at each height. The gas analyzer systems were calibrated before installation and worked permanently, without any recalibration, during the first measurement period of one and a half years. Comparison with the measurements of the slow sensors showed no significant long-term drift in H2O or CO2 for either instrument. Drifts on smaller time scales (of the order of days), caused by contamination with sea salt, were removed naturally by rain. The drift of both quantities had no influence on the fluctuations, which, in contrast to the mean values, are what matter for the flux estimation. All data were filtered for spikes, rain and the influence of the mast. The data set includes the measurements of all sensors as 30-minute averages for one and a half years, June 2008 to December 2009, and for 10 months from November 2011 to August 2012.
Derived quantities for each 30-minute interval, such as the variances of the fast-sensor variables and the momentum, sensible heat, latent heat and CO2 fluxes, are also presented.
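The eddy-covariance estimate at the core of such a dataset reduces to the mean product of fluctuations over each averaging interval, F = mean(w′c′), where w is vertical wind and c the scalar (e.g. CO2 density). A minimal sketch of that computation (not the FINO2 processing code, which additionally involves despiking, coordinate rotation and density corrections):

```python
# Eddy-covariance flux over one averaging interval:
#   F = mean( (w - mean(w)) * (c - mean(c)) )

def ec_flux(w, c):
    """w, c: equal-length fast-sensor samples over one averaging interval."""
    n = len(w)
    wm = sum(w) / n
    cm = sum(c) / n
    return sum((wi - wm) * (ci - cm) for wi, ci in zip(w, c)) / n
```

This is why only the fluctuations, not slow drifts in the mean, matter for the flux, as noted above: a constant offset in w or c drops out of the covariance.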
Abstract:
The research presented in this article covers the design of detached breakwaters, a type of coastal defence work used to combat many beach erosion problems in a stable, sustainable fashion. The main aim of this work is to formulate a functional and environmental (but not structural) design method that enables the fundamental characteristics of a detached breakwater to be defined as a function of the effect it is intended to induce on the coast, taking into account the climatic, geomorphological and geometric variables that influence the changes the shoreline undergoes after construction. This article presents the final result of the investigation by applying the detached breakwater design method to a practical case. It shows how the method enables the geometric pre-sizing of a detached breakwater at a site on the coast with given climate, geomorphology and littoral dynamics, by first setting the final state of equilibrium it is desired to obtain there after construction.