789 results for Assessment and Variability
Abstract:
The 2030 Agenda contains 17 integrated Sustainable Development Goals (SDGs). SDG 12, on Sustainable Consumption and Production (SCP), promotes the efficient use of resources through a systemic change that decouples economic growth from environmental degradation. The Food Systems (FS) pillar of SDG 12 is of paramount relevance because of its interconnection with many other SDGs, and even though it is a crucial world food supplier, the Latin American and Caribbean (LAC) region struggles with environmental and social externalities, low investment in agriculture, inequity, food insecurity, poverty, and migration. Life Cycle Thinking (LCT) was regarded as a pertinent approach to identify hotspots and trade-offs and to support decision-making, helping LAC countries such as Costa Rica to diagnose sustainability and overcome certain challenges. This thesis aimed to 'evaluate the sustainability of selected products from food supply chains in Costa Rica, to provide inputs for further sustainable decision-making, through the application of Life Cycle Thinking'. To do this, Life Cycle Assessment (LCA), Life Cycle Costing (LCC), and Social Life Cycle Assessment (S-LCA) were used to evaluate the sustainability of food-waste-to-energy alternatives and the production of green coffee, raw milk and leafy vegetables, and to identify environmental, social and cost hotspots. This approach also proved to be a useful component of decision- and policy-making processes together with other methods. LCT scientific literature led by LAC or Costa Rican researchers is still scarce; therefore, this research contributed to improving capacities in the use of LCT in this context, while offering potential replicability of the developed frameworks in similar cases. The main limitations related to the representativeness and availability of primary data; however, future research and extension activities are foreseen to increase local data availability, capacity building, and the discussion of potential integration through Life Cycle Sustainability Assessment (LCSA).
Abstract:
The use of environmental DNA (eDNA) analysis as a monitoring tool is becoming increasingly widespread. eDNA metabarcoding methods allow rapid community assessments of different target taxa. This work focuses on the validation of an environmental DNA metabarcoding protocol for the biodiversity assessment of freshwater habitats. The Scolo Dosolo was chosen as the study area, and three sampling points were defined for traditional and eDNA analyses. The canal is a 205 m long man-made channel located in Sala Bolognese (Bologna, Italy). The fish community and freshwater invertebrate metazoans were the target groups for the analysis. After a preliminary study in summer 2019, 2020 was devoted to the sampling campaign, with winter (January), spring (May), summer (July) and autumn (October) surveys. Alongside the water sampling for the eDNA study, traditional fish surveys using electrofishing were performed to assess fish community composition; the invertebrate census was performed using an entomological net and a Surber sampler. After in silico analysis, the MiFish primer set, which amplifies a fragment of the 12S rRNA gene, was selected for bony fishes. For invertebrates, the FWHF2 + FWHR2N primer combination, which amplifies a region of the mitochondrial COI gene, was chosen. Raw reads were analyzed through a bioinformatic pipeline based on the OBITools metabarcoding package and QIIME2. The OBITools pipeline retrieved seven fish taxa and 54 invertebrate taxa belonging to six different phyla, while QIIME2 recovered eight fish taxa and 45 invertebrate taxa belonging to the same six phyla. The metabarcoding results were then compared with the traditional survey data and bibliographic records. Overall, the validated protocol provides a reliable picture of the biodiversity of the study area and an efficient complement to the traditional methods.
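A minimal sketch of the pipeline comparison step described above, using set operations on taxon lists; the taxon names are invented placeholders, not detections from this study.

```python
# Compare taxon detections from two metabarcoding pipelines (illustrative only;
# the taxon names below are placeholders, not results from this study).
obitools_taxa = {"Cyprinus carpio", "Rutilus rutilus", "Gambusia holbrooki"}
qiime2_taxa = {"Cyprinus carpio", "Rutilus rutilus", "Alburnus alburnus"}

shared = obitools_taxa & qiime2_taxa          # detected by both pipelines
only_obitools = obitools_taxa - qiime2_taxa   # detected by OBITools only
only_qiime2 = qiime2_taxa - obitools_taxa     # detected by QIIME2 only

print(f"shared: {sorted(shared)}")
print(f"OBITools only: {sorted(only_obitools)}")
print(f"QIIME2 only: {sorted(only_qiime2)}")
```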
Abstract:
The importance of vernacular architecture as an integral part of our cultural heritage is often undervalued, and the management of rural constructions of heritage value often requires more flexible and adjusted preservation principles than monumental assets. For vernacular architecture, preservation and consolidation concern not only its physical substance but also its intangible values and purpose in society. More than other heritage categories, the vernacular raises the question of which prospective functions it can fulfill in contemporary societies without undermining its legacy. This work analyzes the topic through the case study of the dry docks on the Balearic Island of Formentera, including abundant documentation of traditional construction techniques and materials gathered through a field study, followed by an assessment of threats to and opportunities for vernacular architecture on Formentera, and closing with suggestions for the maintenance and potential functions of the dry docks. For this, a comparative case study was introduced: the capanni da pesca on the Adriatic coast of Italy. The suggestions focus on the importance of maintenance for rural heritage, expressed through the creation of a guide for owners of protected dry docks highlighting good and bad practice examples and recurrent works of care. Furthermore, the thesis seeks to raise awareness of the significance of landscape and social factors in the discourse about vernacular heritage sites. Ultimately, different outlines of tourism development are proposed. The idea is to explore rural and sustainable tourism as a tool for territorial development and enhancement of cultural heritage value, helping to prevent the destination's decline through careful evaluation of its limits of acceptable change and identification of beneficial, sustainable scenarios for the future of the heritage asset and the respective community.
Abstract:
Nowadays the urgency to address climate change and global warming is growing rapidly: the industry and the energy sector must be decarbonized. Hydrogen can play a key role in the energy transition: it is expected to progressively replace fossil fuels, penetrating economies and gaining public interest. However, this possible new energy scenario requires further investigation of safety aspects, which currently represent a challenge. The present study aims to contribute to this field. The focus is on the analysis and modeling of hazardous scenarios involving liquid hydrogen. The investigation of the consequences of BLEVEs (Boiling Liquid Expanding Vapor Explosions) lies at the core of this research: among the various consequences (overpressure, radiation), the interest is in the generation and projection of fragments. The goal is to investigate whether the models developed for conventional fuels and tanks also give good predictions when handling hydrogen. The experimental data from the SH2IFT - Safe Hydrogen Fuel Handling and Use for Efficient Implementation project are used to validate those models. This project's objective was to increase competence in the safety of hydrogen technology, focusing especially on the consequences of handling large amounts of this substance.
Abstract:
Objectives: Quantitative ultrasound (QUS) is an attractive method for assessing fracture risk because it is portable, inexpensive, free of ionizing radiation, and available in areas of the world where DXA is not readily accessible or affordable. However, the diversity of QUS scanners and the variability of fracture outcomes measured in different studies are important obstacles to widespread utilisation of QUS for fracture risk assessment. In this review we aimed to assess the predictive power of heel QUS for fractures, considering different characteristics of the association (QUS parameters and fracture outcomes measured, QUS devices, study populations, and independence from DXA-measured bone density). Materials/Methods: We conducted an inverse-variance random-effects meta-analysis of prospective studies with heel QUS measures at baseline and fracture outcomes in their follow-up. Relative risks (RR) per standard deviation (SD) of different QUS parameters (broadband ultrasound attenuation [BUA], speed of sound [SOS], stiffness index [SI], and quantitative ultrasound index [QUI]) for various fracture outcomes (hip, vertebral, any clinical, any osteoporotic, and major osteoporotic fractures) were reported based on the study questions. Results: 21 studies including 55,164 women and 13,742 men were included, with a total follow-up of 279,124 person-years. All four QUS parameters were associated with risk of different fractures. For instance, the RR of hip fracture per 1 SD decrease was 1.69 (95% CI 1.43-2.00) for BUA, 1.96 (95% CI 1.64-2.34) for SOS, 2.26 (95% CI 1.71-2.99) for SI, and 1.99 (95% CI 1.49-2.67) for QUI. Validated devices from different manufacturers predicted fracture risks with similar performance (meta-regression p-values > 0.05 for differences between devices). There was no sign of publication bias among the studies. QUS measures predicted fracture with similar performance in men and women. Meta-analysis of studies with QUS measures adjusted for hip DXA showed a significant and independent association with fracture risk (RR/SD for BUA = 1.34 [95% CI 1.22-1.49]). Conclusions: This study confirms that QUS of the heel using validated devices predicts the risk of different fracture outcomes in elderly men and women. Further research and international collaborations are needed for standardisation of QUS parameters across manufacturers and inclusion of QUS in fracture risk assessment tools. Disclosure of Interest: None declared.
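A minimal sketch of inverse-variance random-effects pooling on the log-RR scale (DerSimonian-Laird between-study variance); the per-study RRs and confidence intervals below are invented placeholders, not the studies analysed in this review.

```python
import numpy as np

# Hypothetical per-study RR per SD and 95% CIs (placeholders, not the review's data).
rr = np.array([1.6, 1.8, 1.5, 2.0])
ci_low = np.array([1.3, 1.4, 1.2, 1.5])
ci_high = np.array([2.0, 2.3, 1.9, 2.7])

y = np.log(rr)                                        # effect sizes on the log scale
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE recovered from the CI width
v = se ** 2

# Fixed-effect weights and heterogeneity statistic Q
w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
df = len(y) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)                         # DerSimonian-Laird between-study variance

# Random-effects pooling
w_re = 1 / (v + tau2)
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1 / np.sum(w_re))

print(f"pooled RR/SD = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f}-{np.exp(pooled + 1.96 * se_pooled):.2f})")
```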
Abstract:
Simulations of the stratosphere from thirteen coupled chemistry-climate models (CCMs) are evaluated to provide guidance for the interpretation of ozone predictions made by the same CCMs. The focus of the evaluation is on how well the fields and processes that are important for determining the ozone distribution are represented in the simulations of the recent past. The core period of the evaluation is from 1980 to 1999, but long-term trends are compared for an extended period (1960–2004). Comparisons of polar high-latitude temperatures show that most CCMs have only small biases in the Northern Hemisphere in winter and spring, but still have cold biases in the Southern Hemisphere spring below 10 hPa. Most CCMs display the correct stratospheric response of polar temperatures to wave forcing in the Northern, but not in the Southern, Hemisphere. Global long-term stratospheric temperature trends are in reasonable agreement with satellite and radiosonde observations. Comparisons of simulations of methane, mean age of air, and propagation of the annual cycle in water vapor show a wide spread in the results, indicating differences in transport. However, for around half the models there is reasonable agreement with observations. In these models the mean age of air and the water vapor tape recorder signal are generally better than reported in previous model intercomparisons. Comparisons of the water vapor and inorganic chlorine (Cly) fields also show a large intermodel spread. Differences in tropical water vapor mixing ratios in the lower stratosphere are primarily related to biases in the simulated tropical tropopause temperatures and not to transport. The spread in Cly, which is largest in the polar lower stratosphere, appears to be primarily related to transport differences. In general, the amplitude and phase of the annual cycle in total ozone are well simulated, apart from the southern high latitudes. Most CCMs show reasonable agreement with observed total ozone trends and variability on a global scale, but a greater spread in the ozone trends in polar regions in spring, especially in the Arctic. In conclusion, despite the wide range of skills in representing the different processes assessed here, there is sufficient agreement between the majority of the CCMs and the observations that some confidence can be placed in their predictions.
Abstract:
Water security, which is essential to life and livelihood, health and sanitation, is determined not only by the water resource, but also by the quality of the water, the ability to store surplus from precipitation and runoff, and access to and affordability of supply. All of these measures have financial implications for national budgets. The water sector, in the context of this paper's assessment and discussion of the impact of climate change, includes consideration of the existing as well as the projected available water resource and the demand in terms of: quantity and quality of surface and ground water; water supply infrastructure (collection, storage, treatment, distribution); and potential for adaptation. Wastewater management infrastructure is also considered a component of the water sector. Saint Vincent and the Grenadines has two distinct hydrological regimes: mainland St Vincent is one of the wetter islands of the eastern Caribbean, whereas the Grenadines have a drier climate than St Vincent. Surface water is the primary source of water supply on St Vincent, whereas the Grenadines depend on man-made catchments, rainwater harvesting, wells, and desalination. The island state is considered already water stressed, as marked seasonality in rainfall, inadequate supply infrastructure, and limited institutional capacity constrain water supply. Economic modelling approaches were implemented to estimate sectoral demand and supply between 2011 and 2050. Residential, tourism and domestic demand were analysed for the A2, B2 and BAU scenarios. In each of the three scenarios (A2, B2 and BAU), Saint Vincent and the Grenadines will have a water gap, represented by the difference between the demand and supply curves, during the forecast period of 2011 to 2050. The amount of water required increases steadily between 2011 and 2050, implying an increasing demand on the country's resources that the available water supply cannot adequately meet. The Global Water Partnership in its 2005 policy brief suggested that the best way for countries to build the capacity to adapt to climate change is to improve their ability to cope with today's climate variability (GWP, 2005). This suggestion is most applicable to St Vincent and the Grenadines, as the variability already being experienced has placed the island nation under water stress. Strategic priorities should therefore be adopted to increase water production, increase efficiency, strengthen the institutional framework, and decrease wastage. Cost-benefit analysis was stymied by data availability, but the 'no-regrets' approach, which holds that adaptation measures will be beneficial to the land, people and economy of Saint Vincent and the Grenadines with or without climate change, should be adopted.
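A minimal sketch of the demand-supply comparison that defines the water gap; the baseline demand, growth rate and supply figure below are invented placeholders, not the study's A2/B2/BAU estimates.

```python
import numpy as np

years = np.arange(2011, 2051)
demand_2011 = 10.0   # million m3/yr at the start of the forecast (placeholder)
growth_rate = 0.02   # assumed annual demand growth (placeholder)
supply = 11.5        # available supply, assumed constant (placeholder)

demand = demand_2011 * (1 + growth_rate) ** (years - 2011)
gap = np.maximum(demand - supply, 0.0)   # water gap: demand not met by supply

first_deficit = years[gap > 0][0] if np.any(gap > 0) else None
print(f"first year with a water gap: {first_deficit}; gap in 2050: {gap[-1]:.2f} million m3")
```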
Abstract:
Coronary heart disease remains the leading cause of death in the United States, and increased blood cholesterol level has been found to be a major risk factor with roots in childhood. Tracking of cholesterol, i.e., the tendency to maintain a particular cholesterol level relative to the rest of the population, and variability in blood lipid levels with increasing age have implications for cholesterol screening and for the assessment of lipid levels in children aimed at preventing further rises and, ultimately, adult heart disease. In this study, the pattern of change in plasma lipids over time and their tracking were investigated. Also investigated were within-person variance and retest reliability, defined as the square root of within-person variance, for plasma total cholesterol, HDL-cholesterol, LDL-cholesterol, and triglycerides, and their relation to age, sex and body mass index among participants from 8 to 18 years of age. In Project HeartBeat!, 678 healthy children aged 8, 11 and 14 years at baseline were enrolled and examined at 4-monthly intervals for up to 4 years. We examined the relationship between repeated observations using Pearson's correlations. Age- and sex-specific quintiles were calculated, and the probability of participants remaining in the uppermost quintile of their respective distribution was evaluated with life table methods. Plasma total cholesterol, HDL-C and LDL-C at baseline were strongly and significantly correlated with measurements at subsequent visits across the sex and age groups. Plasma triglycerides at baseline were also significantly correlated with subsequent measurements, but less strongly than the other plasma lipids. The probability of remaining in the uppermost quintile was also high (60 to 70%) for plasma total cholesterol, HDL-C and LDL-C. We used a mixed longitudinal, or synthetic cohort, design with continuous observations from age 8 to 18 years to estimate the within-person variance of plasma total cholesterol, HDL-C, LDL-C and triglycerides. A total of 5,809 measurements were available for both cholesterol and triglycerides. A multilevel linear model was used. Within-person variance among repeated measures over up to four years of follow-up was estimated for total cholesterol, HDL-C, LDL-C and triglycerides separately. The relationship of within-person and inter-individual variance with age, sex, and body mass index was evaluated. Likelihood ratio tests were conducted by calculating the difference in −2 log(likelihood) between the basic model and alternative models. The square root of the within-person variance provided the retest reliability (within-person standard deviation) for plasma total cholesterol, HDL-C, LDL-C and triglycerides. We found a retest reliability of 13.6 percent for plasma total cholesterol, 6.1 percent for HDL-cholesterol, 11.9 percent for LDL-cholesterol and 32.4 percent for triglycerides. Retest reliability of plasma lipids was significantly related to age and body mass index, increasing with both. These findings have implications for screening guidelines, as participants in the uppermost quintile tended to maintain their status in each of the age groups during the four-year follow-up. The magnitude of within-person variability of plasma lipids influences the ability to classify children into the risk categories recommended by the National Cholesterol Education Program.
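A minimal sketch of estimating within-person (residual) variance with a multilevel linear model, using statsmodels MixedLM on simulated long-format data; the column names, values and model form are illustrative assumptions, not Project HeartBeat! variables or the study's actual model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated repeated cholesterol measures per child (placeholder values).
n_children, n_visits = 200, 6
subject = np.repeat(np.arange(n_children), n_visits)
age = np.tile(np.linspace(8, 18, n_visits), n_children)
bmi = rng.normal(19, 3, n_children * n_visits)
between = rng.normal(0, 20, n_children)[subject]   # between-person variation
chol = 160 + 1.5 * age + 0.8 * bmi + between + rng.normal(0, 12, n_children * n_visits)

df = pd.DataFrame({"subject": subject, "age": age, "bmi": bmi, "chol": chol})

# Random intercept per child; the residual variance is the within-person variance.
model = smf.mixedlm("chol ~ age + bmi", data=df, groups=df["subject"])
fit = model.fit()

within_var = fit.scale              # estimated within-person variance
retest_sd = np.sqrt(within_var)     # retest reliability as defined in the abstract
print(f"within-person SD (retest reliability) on the simulated scale: {retest_sd:.1f}")
```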
Abstract:
The normalised difference vegetation index (NDVI) has evolved as a primary tool for monitoring continental-scale vegetation changes and interpreting the impact of short- to long-term climatic events on the biosphere. The objective of this research was to assess the nature of the relationships between precipitation and vegetation condition, as measured by the satellite-derived NDVI, within South Australia. The correlation, timing and magnitude of the NDVI response to precipitation were examined for different vegetation formations within the State (forest, scrubland, shrubland, woodland and grassland). Results from this study indicate that there are strong relationships between precipitation and NDVI, both spatially and temporally, within South Australia. Differences in the timing of the NDVI response to precipitation were evident among the five vegetation formations. The most significant relationship between rainfall and NDVI was within the forest formation. Negative correlations between NDVI and precipitation events indicated that vegetation green-up is a result of seasonal patterns in precipitation. Spatial patterns in the average NDVI over the study period closely resembled the boundaries of the five classified vegetation formations within South Australia. Spatial variability within the NDVI data set over the study period differed greatly between and within the vegetation formations examined, depending on the location within the State. Acronyms: AVHRR, Advanced Very High Resolution Radiometer; ENVSA, Environments of South Australia; EOS, Terra-Earth Observing System; EVI, Enhanced Vegetation Index; MODIS, Moderate Resolution Imaging Spectro-radiometer; MVC, Maximum Value Composite; NDVI, Normalised Difference Vegetation Index; NIR, Near Infra-Red; NOAA, National Oceanic and Atmospheric Administration; SPOT, Systeme Pour l'Observation de la Terre.
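A minimal sketch of examining the timing of the NDVI response by correlating monthly NDVI with precipitation at several lags; the series below are synthetic, not the South Australian data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly series in which NDVI responds to precipitation with a ~2-month delay.
months = 120
precip = rng.gamma(shape=2.0, scale=20.0, size=months)
ndvi = 0.3 + 0.002 * np.roll(precip, 2) + rng.normal(0, 0.02, months)

# Correlation between NDVI and precipitation shifted by 0..5 months;
# the lag with the highest r indicates the timing of the response.
for lag in range(6):
    r = np.corrcoef(ndvi[lag:], precip[:months - lag])[0, 1]
    print(f"lag {lag} months: r = {r:.2f}")
```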
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science-policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs and that can be fully studied by advocates and adversaries of a policy choice, in administrative decision-making involving risk assessment. The substitution of decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk-aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a sound scientific way explicitly to account for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can create substantial costs to society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by improved effectiveness of regulations in achieving desired goals. Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
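A minimal sketch of the decision-theoretic framing described above: expected utility of regulatory options evaluated over explicit, possibly conflicting beliefs about causation, with a risk-aversion parameter. All names, payoffs and probabilities are illustrative assumptions, not EPA figures.

```python
# Two candidate policies evaluated against two hypotheses about causation.
# Payoffs are hypothetical net social benefits for each (policy, hypothesis) pair.
payoffs = {
    "strict_limit":  {"agent_is_harmful": 8.0,  "agent_is_benign": -2.0},
    "default_limit": {"agent_is_harmful": -6.0, "agent_is_benign": 3.0},
}
belief = {"agent_is_harmful": 0.4, "agent_is_benign": 0.6}   # assumed probability of each hypothesis

def utility(x, risk_aversion=0.5):
    """Concave (risk-averse) utility; risk_aversion = 0 gives risk neutrality."""
    return x - risk_aversion * x ** 2 / 20.0

for policy, outcomes in payoffs.items():
    eu = sum(belief[h] * utility(v) for h, v in outcomes.items())
    print(f"{policy}: expected utility = {eu:.2f}")
```

Changing the belief probabilities or the risk-aversion parameter makes explicit how scientific uncertainty and attitudes toward risk drive the recommended policy, which is the point made in the abstract.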
Mapping olive varieties and within-field spatial variability using high-resolution QuickBird imagery
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, and aims to assess the feasibility of extracting functional connectivity networks using different methods, as well as the dynamic variability within some of those methods. Furthermore, this work looks into producing valid networks using a sparsely sampled subset of the original data.
In this work we use four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-process technique. Each method comes with unique assumptions, as well as strengths and limitations, for exploring how resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using it, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
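A minimal sketch of the seed-correlation step on a synthetic BOLD matrix (voxels by time); the dimensions, seed index and threshold are assumptions, not values from this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 5000, 240
bold = rng.normal(size=(n_voxels, n_timepoints))   # placeholder BOLD data (voxels x time)

seed_ts = bold[100]                                # seed region's time profile (assumed index)

# Correlate every voxel's time course with the seed time course (Pearson r).
bold_z = (bold - bold.mean(1, keepdims=True)) / bold.std(1, keepdims=True)
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
corr_map = bold_z @ seed_z / n_timepoints

network_mask = corr_map > 0.3                      # voxels assigned to the seed's network
print(f"{network_mask.sum()} voxels correlate with the seed above r = 0.3")
```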
Independent component analysis, the second technique, has established software implementations. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
Using principal component analysis, the dimensionality of the data is reduced to find the directions in which the variance of the data is largest. This method relies on the same basic matrix algebra as ICA, with a few important differences that are outlined later in this text. With this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
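A minimal sketch contrasting PCA and spatial ICA decompositions of the same (time by voxel) matrix using scikit-learn; the data are synthetic and the component count is an assumption, not the thesis's settings.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 240, 5000
data = rng.normal(size=(n_timepoints, n_voxels))   # placeholder resting-state data (time x voxels)

# PCA: components ordered by the variance they explain.
pca = PCA(n_components=20)
pc_timecourses = pca.fit_transform(data)           # (time x components)
pc_maps = pca.components_                          # (components x voxels)

# Spatial ICA: statistically independent spatial maps with associated time courses.
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
ic_maps = ica.fit_transform(data.T).T              # (components x voxels)
ic_timecourses = ica.mixing_                       # (time x components)

print(pc_maps.shape, ic_maps.shape)
```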
To begin investigating the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study the correlation coefficients through time using windows of different sizes. From this technique it is apparent that the correlation level with the seed region is not static throughout the scan.
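A minimal sketch of the sliding-window correlation between a seed and a target time course; the window length and synthetic signals are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints = 240
seed_ts = rng.normal(size=n_timepoints)     # placeholder seed time course
target_ts = rng.normal(size=n_timepoints)   # placeholder target-region time course

window = 40                                 # window length in time points (assumed)
dynamic_r = np.array([
    np.corrcoef(seed_ts[t:t + window], target_ts[t:t + window])[0, 1]
    for t in range(n_timepoints - window + 1)
])

print(f"correlation varies from {dynamic_r.min():.2f} to {dynamic_r.max():.2f} across windows")
```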
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time series. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in default mode network (DMN) patterns between subjects. In addition to improved computational efficiency, the benefit of using a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
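A minimal sketch of the point-process idea: keep only the time points where the seed signal crosses a threshold and average the whole-brain maps at those instants; the threshold and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 5000, 240
bold = rng.normal(size=(n_voxels, n_timepoints))   # placeholder BOLD data (voxels x time)

seed_ts = bold[100]                                # assumed seed region time course
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()

events = np.where(seed_z > 1.0)[0]                 # brief high-amplitude events in the seed
event_map = bold[:, events].mean(axis=1)           # average map across the selected time points

print(f"{events.size} of {n_timepoints} time points kept; map shape {event_map.shape}")
```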
This work compares four distinct methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed and the method assumes that the DMN regions are correlated throughout the entire scan. Looking at the more dynamic aspects of correlation, changing patterns of correlation were evident. The point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Abstract:
One of the biggest challenges contaminant hydrogeology faces is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretative error, calibration accuracy, and parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform a Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of the hydrogeological parameters, we evaluate which are the most influential and therefore require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures and are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations, identify the influence of uncertain parameters, and achieve considerable savings in computational time while maintaining acceptable accuracy.
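A minimal sketch of a variance-based GSA with Sobol indices using SALib; the transport model is replaced by a placeholder surrogate, and the parameter names and bounds are assumptions, not this study's setup.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical uncertain parameters of the transport model (names/bounds are placeholders).
problem = {
    "num_vars": 3,
    "names": ["hydraulic_conductivity", "porosity", "dispersivity"],
    "bounds": [[1e-5, 1e-3], [0.1, 0.4], [0.5, 5.0]],
}

def surrogate_concentration(x):
    """Placeholder meta-model standing in for the hydrocarbon transport model."""
    k, n, a = x
    return 1e3 * k / n * np.exp(-1.0 / a)

X = saltelli.sample(problem, 1024)                      # Saltelli Monte Carlo design
Y = np.apply_along_axis(surrogate_concentration, 1, X)  # model evaluations

Si = sobol.analyze(problem, Y)                          # first-order and total Sobol indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")
```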
Abstract:
Lates calcarifer supports important fisheries throughout tropical Australia. Community-driven fish stocking has resulted in the creation of impoundment fisheries and the supplemental stocking of selected wild riverine populations. Using predominantly tag-recapture methods, condition assessment and stomach-flushing techniques, this study compared the growth of stocked and wild L. calcarifer in a tropical Australian river (Johnstone River) and of stocked fish in a nearby impoundment (Lake Tinaroo). Growth of L. calcarifer in the Johnstone River appeared resource-limited, with juvenile fish in its lower freshwater reaches feeding mainly on small atyid shrimp and limited quantities of fish. Growth was probably greater in estuarine and coastal areas than in the lower freshwater river. Fish in Lake Tinaroo, where prey availability was greater, grew faster than either wild or stocked fish in the lower freshwater areas of the Johnstone River. Growth of L. calcarifer was highly seasonal, with marked declines in the cooler months. This was reflected in both stomach fullness and the percentage of fish with empty stomachs, but the condition of L. calcarifer was similar across most sites. In areas where food resources appear stretched, adverse effects on resident L. calcarifer populations and their attendant prey species should be minimised through cessation of, or more conservative, stocking practices.