953 results for Prediction Models for Air Pollution
Abstract:
Aims Climate and human impacts are changing the nitrogen (N) inputs and losses in terrestrial ecosystems. However, it is largely unknown how these two major drivers of global change will simultaneously influence the N cycle in drylands, the largest terrestrial biome on the planet. We conducted a global observational study to evaluate how aridity and human impacts, together with biotic and abiotic factors, affect key soil variables of the N cycle. Location Two hundred and twenty-four dryland sites from all continents except Antarctica, widely differing in their environmental conditions and human influence. Methods Using a standardized field survey, we measured aridity, human impacts (i.e. proxies of land use and air pollution), key biophysical variables (i.e. soil pH, soil texture and total plant cover) and six important variables related to N cycling in soils: total N, organic N, ammonium, nitrate, dissolved organic:inorganic N and N mineralization rates. We used structural equation modelling to assess the direct and indirect effects of aridity, human impacts and key biophysical variables on the N cycle. Results Human impacts increased the concentration of total N, while aridity reduced it. The effects of aridity and human impacts on the N cycle were spatially disconnected, which may favour scarcity of N in the most arid areas and promote its accumulation in the least arid areas. Main conclusions We found that increasing aridity and anthropogenic pressure are spatially disconnected in drylands. This implies that while places with low aridity and high human impact accumulate N, the most arid sites with the lowest human impacts lose N. Our analyses also provide evidence that both increasing aridity and human impacts may enhance the relative dominance of inorganic N in dryland soils, having a negative impact on key functions and services provided by these ecosystems.
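The structural equation modelling step described above can be prototyped in a few lines. The sketch below uses the semopy package with hypothetical column names (aridity, human_impact, soil_ph, plant_cover, total_n); the actual model structure and variables used in the study may differ.

```python
# A minimal structural-equation-model sketch (assumed variable names, not the
# authors' exact specification): aridity and human impact act on total soil N
# both directly and indirectly through plant cover and soil pH.
import pandas as pd
import semopy

model_desc = """
plant_cover ~ aridity + human_impact
soil_ph     ~ aridity
total_n     ~ aridity + human_impact + plant_cover + soil_ph
"""

data = pd.read_csv("dryland_sites.csv")   # hypothetical file, one row per site
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                    # path coefficients and p-values
```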
Abstract:
Survivors of childhood cancer carry a substantial burden of morbidity and are at increased risk for premature death. Furthermore, clear associations exist between specific therapeutic exposures and the risk for a variety of long-term complications. The entire landscape of health issues encountered for decades after successful completion of treatment is currently being explored in various collaborative research settings. These settings include large population-based or multi-institutional cohorts and single-institution studies. The ascertainment of outcomes has depended on self-reporting, linkage to registries, or clinical assessments. Survivorship research in the cooperative group setting, such as the Children's Oncology Group, has leveraged the clinical trials infrastructure to explore the molecular underpinnings of treatment-related adverse events, and to understand specific complications in the setting of randomized risk-reduction strategies. This review highlights the salient findings from these large collaborative initiatives, emphasizing the need for life-long follow-up of survivors of childhood cancer, and describing the development of several guidelines and efforts toward harmonization. Finally, the review reinforces the need to identify populations at highest risk, facilitating the development of risk prediction models that would allow for targeted interventions across the entire trajectory of survivorship.
Abstract:
Fine carbonaceous aerosols (CAs) are a key factor in the currently poor air quality of Chinese megacities, yet few studies have simultaneously examined the origins of the different CA species using specific and powerful source tracers. Here, we present a detailed source apportionment for various CA fractions, including organic carbon (OC), water-soluble OC (WSOC), water-insoluble OC (WIOC), elemental carbon (EC) and secondary OC (SOC), in the largest cities of North China (Beijing, BJ) and South China (Guangzhou, GZ), using measurements of radiocarbon and anhydrosugars. Results show that non-fossil sources such as biomass burning and biogenic emissions make a significant contribution to total CAs in Chinese megacities: 56±4% in BJ and 46±5% in GZ, respectively. The relative contributions of primary fossil carbon from coal and liquid petroleum combustion, primary non-fossil carbon and secondary organic carbon (SOC) to total carbon are 19, 28 and 54% in BJ, and 40, 15 and 46% in GZ, respectively. Non-fossil sources account for 52% of SOC in BJ and 71% in GZ. These results suggest that biomass burning has a greater influence on regional particulate air pollution in North China than in South China. We also observed a complete haze bloom-decay process in South China, which illustrates that both primary and secondary matter from fossil sources played a key role in the blooming phase of the pollution episode, while the haze phase was predominantly driven by fossil-derived secondary organic matter and nitrate.
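Radiocarbon-based apportionment of fossil versus non-fossil carbon rests on a simple mass balance: fossil carbon contains no 14C, so the measured fraction modern of a sample scales directly with its non-fossil share. The sketch below shows that calculation; the reference fraction-modern value for contemporary non-fossil carbon (here 1.06) and the example numbers are assumptions, not values from the study.

```python
def nonfossil_fraction(f14c_sample: float, f14c_ref: float = 1.06) -> float:
    """Estimate the non-fossil fraction of a carbon fraction from radiocarbon.

    f14c_sample: measured fraction modern (F14C) of the sample.
    f14c_ref: assumed F14C of purely non-fossil (contemporary) carbon; values
              around 1.04-1.10 are commonly used to account for bomb 14C.
    Fossil carbon has F14C = 0, so the ratio gives the non-fossil share.
    """
    return min(f14c_sample / f14c_ref, 1.0)


# Hypothetical example: an OC sample measured at F14C = 0.59.
oc_total = 10.0                              # ug C per m3 (made-up value)
f_nf = nonfossil_fraction(0.59)              # ~0.56 non-fossil
print(f"non-fossil OC: {f_nf * oc_total:.1f} ug/m3, "
      f"fossil OC: {(1 - f_nf) * oc_total:.1f} ug/m3")
```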
Abstract:
The European Eye Epidemiology (E3) consortium is a recently formed consortium of 29 groups from 12 European countries. It already comprises 21 population-based studies and 20 other studies (case-control, cases only, randomized trials), providing ophthalmological data on approximately 170,000 European participants. The aim of the consortium is to promote and sustain collaboration and sharing of data and knowledge in the field of ophthalmic epidemiology in Europe, with particular focus on the harmonization of methods for future research, estimation and projection of the frequency and impact of visual outcomes in European populations (including temporal trends and European subregions), identification of risk factors and pathways for eye diseases (lifestyle, vascular and metabolic factors, genetics, epigenetics and biomarkers), and development and validation of prediction models for eye diseases. Coordinating these existing data will allow a detailed study of the risk factors and consequences of eye diseases and visual impairment, including study of international geographical variation, which is not possible in individual studies. It is expected that collaborative work on these existing data will provide additional knowledge, despite the fact that the risk factors and the methods for collecting them differ somewhat among the participating studies. Most studies also include biobanks of various biological samples, which will enable identification of biomarkers to detect and predict occurrence and progression of eye diseases. This article outlines the rationale and design of the consortium and presents a summary of its methodology.
Abstract:
Chronic respiratory illnesses are a significant cause of morbidity and mortality, and acute changes in respiratory function often lead to hospitalization. Air pollution is known to exacerbate asthma, but the molecular mechanisms of this are poorly understood. The current studies were aimed at clarifying the roles of nerve subtypes and purinergic receptors in respiratory reflex responses following exposure to irritants. In C57Bl/6J female mice, inspired adenosine produced sensory irritation, which was shown to be mediated mostly by A-delta fibers. Second, the response to inhaled acetic acid was found to be dually influenced by C and A-delta fibers, as indicated by the observed effects of capsaicin pretreatment, which selectively destroys TRPV1-expressing fibers (mostly C fibers), and of pretreatment with theophylline, a nonselective adenosine receptor antagonist. The responses to both adenosine and acetic acid were enhanced in the ovalbumin-allergic airway disease model, although the particular pathway altered is still unknown.
Abstract:
Objectives. To predict who will develop an aortic dissection, and to create separate male and female prediction models using the following risk factors: age, ethnicity, hypertension, high cholesterol, smoking, alcohol use, diabetes, heart attack, congestive heart failure, congenital and non-congenital heart disease, Marfan syndrome, and bicuspid aortic valve. Methods. Using 572 patients diagnosed with aortic aneurysms, a model was developed for males and another for females using 80% of the data; each model was then verified using the remaining 20% of the data. Results. The male model predicted the probability of a male having a dissection (p=0.076) and the female model predicted the probability of a female having a dissection (p=0.054). The validation models did not support the choice of the developmental models. Conclusions. The best models obtained suggested that those at greater risk of having a dissection are males with non-congenital heart disease who drink alcohol, and females with non-congenital heart disease and bicuspid aortic valve.
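An 80/20 development/validation split with sex-specific models, as described above, can be sketched as follows. The column names and the use of scikit-learn logistic regression are assumptions for illustration; the study's exact modelling procedure is not specified in the abstract.

```python
# Sketch of sex-specific dissection models with an 80/20 split (assumed schema).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("aneurysm_patients.csv")        # hypothetical file
risk_factors = ["age", "hypertension", "high_cholesterol", "smoking",
                "alcohol_use", "diabetes", "heart_attack", "chf",
                "noncongenital_hd", "marfan", "bicuspid_valve"]

for sex, group in df.groupby("sex"):             # one model per sex
    X, y = group[risk_factors], group["dissection"]
    X_dev, X_val, y_dev, y_val = train_test_split(
        X, y, test_size=0.20, random_state=0, stratify=y)
    model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"{sex}: validation AUC = {auc:.2f}")
```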
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU", lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one value per variable paradigm and is widely employed in a host of clinical models and tools. These are often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these consists of the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data. These are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood that a representation of the time series data elements is produced that is able to distinguish between two or more classes of outcomes.
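A minimal sketch of the core idea, turning a raw vital-sign time series into a trend-analysis latent feature over a fixed duration and resolution, is shown below; the window length, resolution, and use of a least-squares slope are illustrative assumptions, not the dissertation's exact operations.

```python
# Sketch: derive a trend (slope) latent feature from a vital-sign time series.
import numpy as np
import pandas as pd

def trend_feature(series: pd.Series, duration: str = "60min",
                  resolution: str = "5min") -> float:
    """Resample the trailing `duration` of a time-indexed series to a fixed
    resolution and return its least-squares slope (units per minute)."""
    cutoff = series.index[-1] - pd.Timedelta(duration)
    window = series[series.index >= cutoff].resample(resolution).mean().dropna()
    if len(window) < 3:
        return float("nan")               # too sparse to characterize a trend
    minutes = (window.index - window.index[0]).total_seconds() / 60.0
    slope, _ = np.polyfit(minutes, window.values, deg=1)
    return slope

# Hypothetical usage: hr is a pandas Series of heart rates indexed by timestamp.
# hr_trend = trend_feature(hr)   # a negative slope suggests deterioration
```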
The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time-series-based models are infeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model. Each of the seventeen steps in this process is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit", presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by exclusion of the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
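The preprocessing issues named for the second manuscript (defining a reference time, conforming each series to a predefined structure, and normalizing variable families) could be sketched roughly as below; the field names, window structure, and z-score normalization are assumptions for illustration, not the dissertation's implementation.

```python
# Rough preprocessing sketch: anchor each episode on a reference time, reduce the
# series to a fixed duration/resolution grid, and normalize by variable family.
import pandas as pd

def preprocess_episode(vitals: pd.DataFrame, reference_time: pd.Timestamp,
                       duration: str = "6H", resolution: str = "5min") -> pd.DataFrame:
    """vitals: time-indexed frame with one column per measured variable."""
    start = reference_time - pd.Timedelta(duration)
    window = vitals.loc[start:reference_time]       # anchor on the reference time
    grid = window.resample(resolution).mean()       # conform to the design-phase structure
    return grid.interpolate(limit_direction="both") # simple imputation of gaps

def normalize_family(episodes: list[pd.DataFrame], family: list[str]) -> list[pd.DataFrame]:
    """Z-score all columns in a variable family with shared statistics, so related
    variables stay on a common scale rather than being scaled one at a time."""
    pooled = pd.concat([e[family] for e in episodes])
    mu, sigma = pooled.stack().mean(), pooled.stack().std()
    for e in episodes:
        e[family] = (e[family] - mu) / sigma
    return episodes
```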
Abstract:
It is well accepted that tumorigenesis is a multi-step process involving aberrant functioning of genes regulating cell proliferation, differentiation, apoptosis, genome stability, angiogenesis and motility. To obtain a full understanding of tumorigenesis, it is necessary to collect information on all aspects of cell activity. Recent advances in high throughput technologies allow biologists to generate massive amounts of data, more than might have been imagined decades ago. These advances have made it possible to launch comprehensive projects such as The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC), which systematically characterize the molecular fingerprints of cancer cells using gene expression, methylation, copy number, microRNA and SNP microarrays as well as next-generation sequencing assays interrogating somatic mutation, insertion, deletion, translocation and structural rearrangements. Given the massive amount of data, a major challenge is to integrate information from multiple sources and formulate testable hypotheses. This thesis focuses on developing methodologies for integrative analyses of genomic assays profiled on the same set of samples. We have developed several novel methods for integrative biomarker identification and cancer classification. We introduce a regression-based approach to identify biomarkers predictive of therapy response or survival by integrating multiple assays including gene expression, methylation and copy number data through penalized regression. To identify key cancer-specific genes accounting for multiple mechanisms of regulation, we have developed the integIRTy software that provides robust and reliable inferences about gene alteration by automatically adjusting for sample heterogeneity as well as technical artifacts using Item Response Theory. To cope with the increasing need for accurate cancer diagnosis and individualized therapy, we have developed a robust and powerful algorithm called SIBER to systematically identify bimodally expressed genes using next-generation RNA-seq data. We have shown that prediction models built from these bimodal genes have the same accuracy as models built from all genes. Further, prediction models with dichotomized gene expression measurements based on their bimodal shapes still perform well. The effectiveness of outcome prediction using discretized signals paves the way for more accurate and interpretable cancer classification by integrating signals from multiple sources.
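The regression-based integration described above, combining expression, methylation and copy-number features in a single penalized model, could be prototyped roughly as below; the block names, the elastic-net penalty, and the scikit-learn estimator are assumptions for illustration, not the thesis's actual implementation.

```python
# Sketch: penalized regression over concatenated multi-platform feature blocks.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler

# Hypothetical matrices, one row per tumor sample, aligned on the same samples.
expression = pd.read_csv("expression.csv", index_col=0)
methylation = pd.read_csv("methylation.csv", index_col=0)
copy_number = pd.read_csv("copy_number.csv", index_col=0)
response = pd.read_csv("therapy_response.csv", index_col=0)["responder"]

X = pd.concat([expression.add_prefix("expr_"),
               methylation.add_prefix("meth_"),
               copy_number.add_prefix("cn_")], axis=1)
X = StandardScaler().fit_transform(X)

# Elastic-net penalty keeps a sparse set of biomarkers while tolerating
# correlated features across platforms.
model = LogisticRegressionCV(penalty="elasticnet", solver="saga",
                             l1_ratios=[0.5], Cs=10, max_iter=5000)
model.fit(X, response.loc[expression.index])
selected = np.flatnonzero(model.coef_[0])
print(f"{selected.size} candidate biomarkers retained")
```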
Abstract:
Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) from ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24 h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best and worst case scenarios. Most variables (17/33 = 52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables with >1% missing data, study site was associated with missingness (all p≤0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. Complete case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random, and complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis estimating correct classification under upper (best case scenario)/lower (worst case scenario) bounds may be more informative than multiple imputation, which provided results similar to complete case analysis.
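A rough sketch of the comparison between complete-case analysis, imputation, and scenario-based bounds follows; scikit-learn's IterativeImputer stands in for multivariate-normal multiple imputation, the extreme-value fills are only a crude stand-in for the paper's best/worst-case scenarios, and the data schema is hypothetical.

```python
# Sketch: complete-case vs imputed vs best/worst-case fills for an MT model.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("prommtt_like.csv")                 # hypothetical dataset
predictors = ["sbp", "heart_rate", "resp_rate", "base_deficit", "inr"]
y = df["massive_transfusion"]

def cv_accuracy(X, y):
    return cross_val_score(LogisticRegression(max_iter=1000), X, y,
                           scoring="accuracy", cv=5).mean()

# 1) Complete-case analysis: drop any row with a missing predictor.
cc = df.dropna(subset=predictors)
acc_cc = cv_accuracy(cc[predictors], y.loc[cc.index])

# 2) Imputation-based analysis (single chained imputation as a stand-in for MI).
X_imp = IterativeImputer(random_state=0).fit_transform(df[predictors])
acc_imp = cv_accuracy(X_imp, y)

# 3) Crude sensitivity bounds: fill missing values with extreme plausible values.
low_fill = df[predictors].fillna(df[predictors].min())   # one extreme scenario
high_fill = df[predictors].fillna(df[predictors].max())  # the opposite scenario
print(acc_cc, acc_imp, cv_accuracy(low_fill, y), cv_accuracy(high_fill, y))
```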
Abstract:
This monograph addresses water resource management in large irrigation networks. It describes the case of the Mendoza River, in the province of the same name, which was regulated in 2002. The river rises in the Andes and carries a substantial load of suspended solids, which are now largely retained by the Potrerillos reservoir. The "clear waters" released from the reservoir cause erosion problems, which in turn appear to be increasing seepage in the canals and, with it, aquifer recharge in some areas, as well as problems caused by a rising water table in others. Processes observed in other irrigation districts after river regulation are reviewed, leading to the conclusion that the Mendoza River is a case susceptible to certain damages, already identified in the General Environmental Impact Statement for the Potrerillos reservoir, which are now appearing in the irrigation network. Based on sedimentology studies of the Mendoza River, a technical analysis is made of the phenomena associated with the change in the physical characteristics of the water. Erosion processes are then described according to classical hydraulics. Conveyance efficiency (Ec) and canal seepage are defined, along with their importance in different irrigation districts, and the studies carried out in the Mendoza River area are then reviewed. The spatial development of the oasis is analysed, along with the limited planning of its canal layout and its age. The description of the soils leads to conclusions about the importance of soil structure and about the role that the fine fractions of the different textural classes, even when in the minority, play with respect to Ec. The criteria used to distribute water in Mendoza are described, and the flows currently delivered are analysed and related to water-table levels. Various actions undertaken by the province to mitigate the effects of the clear waters are also mentioned. A review of the methods used to measure Ec illustrates the state of the art. An analysis of the advantages and disadvantages of the different methods, and of the results obtained with them, leads to the conclusion that the inflow-outflow method is the best suited to Mendoza; methodological aspects of the measurement are also covered. It is also concluded that Ec has been insufficiently evaluated; that the fine soil fractions often have more influence on Ec than texture; that studying Ec in the different management areas is therefore necessary to understand waterlogging and aquifer recharge processes; and that administrative losses may weigh more heavily than Ec. It is recommended that Ec evaluation work continue, as it is needed for all activities in the basin; fitting Ec prediction models is not recommended for this river; and the characteristics of the soils require the international literature to be interpreted and applied with judgement, yet even so no generalizations can be made about Ec in Mendoza.
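The inflow-outflow method recommended in the monograph reduces to a simple water balance over a canal reach. The sketch below is a minimal illustration with hypothetical values; in practice the difficulty lies in measuring the flows and accounting for offtakes along the reach.

```python
def conveyance_efficiency(inflow_m3s: float, outflow_m3s: float,
                          offtakes_m3s: float = 0.0) -> float:
    """Conveyance efficiency (Ec) of a canal reach by the inflow-outflow method:
    water leaving the reach (downstream flow plus measured deliveries) divided
    by the water entering it, expressed as a percentage."""
    return 100.0 * (outflow_m3s + offtakes_m3s) / inflow_m3s

# Hypothetical reach: 10.0 m3/s enters, 0.8 m3/s is delivered to offtakes,
# 8.7 m3/s is measured at the downstream end -> Ec = 95 %.
print(f"Ec = {conveyance_efficiency(10.0, 8.7, 0.8):.1f} %")
```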
Abstract:
Poultry-sector wastes, mainly layer manure (laying hens) and broiler litter (meat birds), can have a negative environmental impact by contributing to soil, water and air pollution. Aerobic stabilization through composting is a treatment alternative for reducing this pollution. The aim of this work was to evaluate the composting process in two mixtures with different proportions of poultry waste (layer manure and broiler litter). Two mixtures containing 81% and 70% poultry waste were composted for 16 weeks. The variables analysed were: temperature (T), pH, electrical conductivity (EC), moisture (M), cation exchange capacity (CEC), total organic carbon (TOC), ammonium (NH4+), nitrate (NO3-), total nitrogen (TN) and soluble carbon (SC). The final characteristics of composts A and B were, respectively: pH 7.1 and 6.8, EC 3.3 and 2.9 mS cm-1, TOC 14.8 and 17.9%, TN 0.97 and 0.88%, NH4+ 501 and 144.9 mg kg-1, and NO3- 552.3 and 543.0 mg kg-1. The composting process could be a tool for stabilizing poultry waste while minimizing its environmental impact.
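A common way to read final compost figures like those above is through the C:N ratio (total organic carbon divided by total nitrogen). The snippet below computes it from the reported values; treating roughly 15-20 as an acceptable range for a mature compost is a general rule of thumb, not a claim from this study.

```python
# C:N ratio of the two composts from the reported final TOC and TN values.
composts = {"A": {"toc_pct": 14.8, "tn_pct": 0.97},
            "B": {"toc_pct": 17.9, "tn_pct": 0.88}}

for name, c in composts.items():
    cn = c["toc_pct"] / c["tn_pct"]
    print(f"Compost {name}: C:N = {cn:.1f}")   # A ~15.3, B ~20.3
```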
Abstract:
Urban forest health was surveyed on Roznik in Ljubljana (46.05141 N, 14.47797 E) in 2013 using two methods: ICP Forests and UFMO. ICP Forests, the International Co-operative Programme on the Assessment and Monitoring of Air Pollution Effects on Forests, is the most commonly used monitoring programme in Europe and is based on a systematic grid. The UFMO (Urban Forests Management Oriented) method was developed within the framework of the EMoNFUr project - Establishing a monitoring network to assess lowland forest and urban plantations in Lombardy and urban forest in Slovenia (LIFE10 ENV/IT/000399) - and is based on non-linear transects (GPS tracks). ICP Forests monitoring plots were established in July 2013 in the Roznik urban forest in Ljubljana. The 32 plots are located on a 500 × 500 m sampling grid, down-scaled from the National Forest Monitoring survey, which is based on a 4 × 4 km national sample grid. With the ICP Forests method, the following parameters were gathered for each tree within the 15 plots, according to the ICP Forests manual for visual assessment of crown condition and damaging agents: tree species, percentage of defoliation, affected part of the tree, specification of the affected part, location in the crown, symptom, symptom specification, causal agents/factors, age of damage, damage extent, and damage extent on the trunk. With the UFMO method, the following parameters were recorded for each tree that required a silvicultural measure (felling, pruning, sanitary felling, thinning, etc.): tree species, breast-height diameter, causal agent/damaging factor, GPS waypoint and GPS track. To provide an overall picture of urban forest health problems, other biotic and abiotic damaging factors that did not require management action were also recorded.
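For illustration only, the per-tree record collected with the UFMO method could be represented roughly as the following data structure; the field names and types are assumptions based on the parameters listed above, not the project's actual data model.

```python
# Hypothetical record for one UFMO observation (fields assumed from the abstract).
from dataclasses import dataclass

@dataclass
class UFMOTreeRecord:
    tree_species: str
    breast_height_diameter_cm: float
    damaging_factor: str                 # causal agent / damaging factor
    measure: str                         # e.g. "felling", "pruning", "thinning"
    gps_waypoint: tuple[float, float]    # (latitude, longitude)
    gps_track_id: str

record = UFMOTreeRecord("Fagus sylvatica", 38.5, "bark beetle",
                        "sanitary felling", (46.05141, 14.47797), "track_07")
```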
Abstract:
This paper sheds light on the iron and steel (IS) scrap trade to examine how economic development affects the quality demanded of recyclable resources. A simple model is presented that shows a mechanism by which scrap quality affects the direction of trade through comparative advantage. We find that economic development in both importing and exporting countries has a positive effect on the quality of traded recyclables. Developed countries that intend to improve domestic recovery of recyclables should improve the quality of recyclables separation, while developing countries should tighten environmental regulations to help decrease imports of recyclables that cause pollution.
Abstract:
This paper integrates two lines of research into a unified conceptual framework: trade in global value chains and embodied emissions. This allows both value added and emissions to be systematically traced at the country, sector, and bilateral levels through various production network routes. By combining value-added and emissions accounting in a consistent way, the potential environmental cost (amount of emissions per unit of value added) along global value chains can be estimated. Using this unified accounting method, we trace CO2 emissions in the global production and trade network among 41 economies in 35 sectors from 1995 to 2009, basing our calculations on the World Input–Output Database, and show how these accounts help us to better understand the impact of cross-country production sharing on the environment.
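The building block behind this kind of accounting is the environmentally extended input-output model: total emission intensities are obtained from the Leontief inverse and then applied to final-demand flows. The sketch below shows that standard calculation on a toy two-sector system; it is not the paper's full bilateral value-chain decomposition, and the numbers are made up.

```python
# Standard embodied-emissions calculation: e = f (I - A)^-1 y
import numpy as np

A = np.array([[0.2, 0.3],      # inter-industry coefficient matrix (toy values)
              [0.1, 0.4]])
f = np.array([0.8, 0.3])       # direct CO2 per unit of output in each sector
y = np.array([100.0, 50.0])    # final demand served by each sector

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse: output needed per unit of demand
total_intensity = f @ L            # CO2 embodied per unit of final demand
embodied = total_intensity * y     # CO2 embodied in each sector's final demand
print(total_intensity, embodied.sum())
```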