953 results for Prediction Models for Air Pollution
Abstract:
Insulin-like growth factor type 1 (IGF1) is a mediator of growth hormone (GH) action, and therefore, IGF1 is a candidate gene for recombinant human GH (rhGH) pharmacogenetics. Lower serum IGF1 levels were found in adults homozygous for 19 cytosine-adenosine (CA) repeats in the IGF1 promoter. The aim of this study was to evaluate the influence of (CA)n IGF1 polymorphism, alone or in combination with GH receptor (GHR)-exon 3 and -202 A/C insulin-like growth factor binding protein-3 (IGFBP3) polymorphisms, on the growth response to rhGH therapy in GH-deficient (GHD) patients. Eighty-four severe GHD patients were genotyped for (CA)n IGF1, -202 A/C IGFBP3 and GHR-exon 3 polymorphisms. Multiple linear regressions were performed to estimate the effect of each genotype, after adjustment for other influential factors. We assessed the influence of genotypes on the first year growth velocity (1st y GV) (n = 84) and adult height standard deviation score (SDS) adjusted for target-height SDS (AH-TH SDS) after rhGH therapy (n = 37). Homozygosity for the IGF1 19CA repeat allele was negatively correlated with 1st y GV (P = 0.03) and AH-TH SDS (P = 0.002) in multiple linear regression analysis. In conjunction with clinical factors, IGF1 and IGFBP3 genotypes explain 29% of the 1st y GV variability, whereas IGF1 and GHR polymorphisms explain 59% of final height-target-height SDS variability. We conclude that homozygosity for the IGF1 (CA)19 allele is associated with less favorable short- and long-term growth outcomes after rhGH treatment in patients with severe GHD. Furthermore, this polymorphism exhibits a non-additive interaction with the -202 A/C IGFBP3 genotype on the 1st y GV and with the GHR-exon 3 genotype on adult height. The Pharmacogenomics Journal (2012) 12, 439-445; doi:10.1038/tpj.2011.13; published online 5 April 2011
Abstract:
Current methods for quality control of sugar cane are performed on extracted juice using several methodologies, often requiring appreciable time and chemicals (sometimes toxic), which makes them expensive and far from green. The present study proposes X-ray spectrometry combined with chemometric methods as an innovative alternative technique for determining sugar cane quality parameters, specifically sucrose concentration, POL, and fiber content. Measurements were performed on stem, leaf, and juice, and those applied directly to the stem provided the best results. Prediction models for sugar cane stem determinations with a single 60 s irradiation using portable X-ray fluorescence equipment allow estimating % sucrose, % fiber, and POL simultaneously. Average relative deviations of around 8% in the prediction step are acceptable considering that the measurements were done in the field. These results may indicate the best period to cut a particular crop as well as serve to evaluate the quality of sugar cane for the sugar and alcohol industries.
Abstract:
Background: Brazil is the world's largest producer of sugarcane. Harvest is predominantly manual, exposing workers to health risks: intense physical exertion, heat, and pollutants from sugarcane burning. Design: Panel study to evaluate the effects of burnt-sugarcane harvesting on blood markers and on the cardiovascular system. Methods: Twenty-eight healthy male workers living in the countryside of Brazil underwent assessment of blood markers, blood pressure, heart rate variability, cardiopulmonary exercise testing, sympathetic nerve activity, and forearm blood flow (venous occlusion plethysmography) during the burnt-sugarcane harvest and four months later, while they performed other activities in sugarcane culture. Results: Mean participant age was 31 ± 6.3 years, with 9.8 ± 8.4 years of sugarcane work. Work during the harvest period was associated with higher serum levels of creatine kinase, 136.5 U/L (IQR: 108.5-216.0) vs. 104.5 U/L (IQR: 77.5-170.5) (p = 0.001); plasma malondialdehyde, 7.5 ± 1.4 µM/dl vs. 6.9 ± 1.0 µM/dl (p = 0.058); glutathione peroxidase, 55.1 ± 11.8 Ug/Hb vs. 39.5 ± 9.5 Ug/Hb (p < 0.001); glutathione transferase, 3.4 ± 1.3 Ug/Hb vs. 3.0 ± 1.3 Ug/Hb (p = 0.001); and 24-hour systolic blood pressure, 120.1 ± 10.3 mmHg vs. 117.0 ± 10.0 mmHg (p = 0.034). In cardiopulmonary exercise testing, rest-to-peak diastolic blood pressure increased by 11.12 mmHg in the harvest period and by 5.13 mmHg in the non-harvest period. A 10-millisecond reduction in rMSSD and a 10 bursts/min increase in sympathetic nerve activity were associated with rises of 2.2 and 1.8 mmHg in systolic arterial pressure, respectively. Conclusion: Work in burnt-sugarcane harvesting was associated with changes in blood markers and higher blood pressure, which may be related to autonomic imbalance.
Abstract:
The designation of biodiesel as an environmentally friendly alternative to diesel oil has boosted its commercialization and use. However, most biodiesel environmental-safety studies concern air pollution, and so far there are very few literature data on its impacts on other biotic systems, e.g. water, and on the organisms exposed. Spill simulations in water were carried out with neat diesel, neat biodiesel, and their blends, aiming to assess their genotoxic potential in the event of contamination of water systems. The water-soluble fractions (WSF) from the spill simulations were submitted to solid-phase extraction with a C-18 cartridge, and the extracts obtained were evaluated with genotoxic and mutagenic bioassays [the Salmonella assay and the in vitro MicroFlow (R) kit (Litron) assay]. Mutagenic and genotoxic effects were observed, respectively, in the Salmonella/microsome preincubation assay and in the in vitro MN test carried out with the biodiesel WSF. This result may be related to the presence of pollutants in biodiesel derived from the raw-material source used in its production chain. The data show that care should be taken when using biodiesel to avoid harmful effects on living organisms in cases of water pollution. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Abstract Background Smear-negative pulmonary tuberculosis (SNPT) accounts for 30% of the pulmonary tuberculosis cases reported yearly in Brazil. This study aimed to develop a prediction model for SNPT for outpatients in areas with scarce resources. Methods The study enrolled 551 patients with clinical-radiological suspicion of SNPT in Rio de Janeiro, Brazil. The original data were divided into two equivalent samples for generation and validation of the prediction models. Symptoms, physical signs and chest X-rays were used for constructing logistic regression and classification-and-regression-tree models. From the logistic regression, we generated a clinical and radiological prediction score. The area under the receiver operating characteristic curve, sensitivity, and specificity were used to evaluate the models' performance in both the generation and validation samples. Results It was possible to generate predictive models for SNPT with sensitivity ranging from 64% to 71% and specificity ranging from 58% to 76%. Conclusion The results suggest that these models might be useful as screening tools for estimating the risk of SNPT, optimizing the utilization of more expensive tests, and avoiding the costs of unnecessary anti-tuberculosis treatment. These models might be cost-effective tools in a health care network with a hierarchical distribution of scarce resources.
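As an illustration of how such a clinical-radiological score can be built and evaluated, here is a minimal sketch in Python. The findings, point weights, and patients below are entirely hypothetical (none come from the study above); the evaluation mirrors the abstract's metrics: sensitivity, specificity, and the area under the ROC curve.

```python
# Hypothetical point score (e.g. rounded logistic-regression coefficients)
# evaluated with sensitivity, specificity, and a rank-based AUC.

def score(patient, weights):
    """Sum the points for each positive finding in the patient record."""
    return sum(weights[f] for f, present in patient.items() if present)

def auc(scores_pos, scores_neg):
    """Rank-based AUC: probability a true case outscores a non-case (ties count 0.5)."""
    wins = sum((sp > sn) + 0.5 * (sp == sn) for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical weights and toy patients (1 = finding present, 0 = absent).
weights = {"cough>2wk": 2, "weight_loss": 1, "xray_infiltrate": 3, "fever": 1}
cases = [{"cough>2wk": 1, "weight_loss": 1, "xray_infiltrate": 1, "fever": 0},
         {"cough>2wk": 1, "weight_loss": 0, "xray_infiltrate": 1, "fever": 1}]
controls = [{"cough>2wk": 1, "weight_loss": 0, "xray_infiltrate": 0, "fever": 0},
            {"cough>2wk": 0, "weight_loss": 0, "xray_infiltrate": 0, "fever": 1}]

s_pos = [score(p, weights) for p in cases]
s_neg = [score(p, weights) for p in controls]

cutoff = 4  # classify as SNPT-suspect at or above this score
sensitivity = sum(s >= cutoff for s in s_pos) / len(s_pos)
specificity = sum(s < cutoff for s in s_neg) / len(s_neg)
print(sensitivity, specificity, auc(s_pos, s_neg))
```

In practice the cutoff is chosen on the generation sample and the reported sensitivity/specificity come from the held-out validation sample, as the abstract describes.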
Abstract:
Abstract Background Air pollution in São Paulo is constantly measured by the São Paulo State Environmental Agency; however, there is no information on the variation between places with different traffic densities. This study was intended to identify a gradient of exposure to traffic-related air pollution across different areas of São Paulo, to provide information for future epidemiological studies. Methods We measured NO2 using Palmes' diffusion tubes at 36 sites on streets chosen to be representative of different road types and traffic densities in São Paulo, during two one-week periods (July and August 2000). In each study period, two tubes were installed at each site, and two additional tubes were installed at 10 control sites. Results Average NO2 concentrations were related to traffic density observed on the spot, to the number of vehicles counted, and to the traffic-density strata defined by the city Traffic Engineering Company (CET). Average NO2 concentrations were 63 μg/m3 and 49 μg/m3 in the first and second periods, respectively. Dividing the sites by observed traffic density, we found: heavy traffic (n = 17): 64 μg/m3 (95% CI: 59-68 μg/m3); local traffic (n = 16): 48 μg/m3 (95% CI: 44-52 μg/m3) (p < 0.001). Conclusion The differences in NO2 levels between heavy- and local-traffic sites are large enough to suggest the use of a more refined classification of exposure in epidemiological studies in the city. The number of vehicles counted, traffic density observed on the spot, and the traffic-density strata defined by the CET may be used as proxies for traffic exposure in São Paulo when more accurate measurements are not available.
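The site-group summaries above (a mean with a 95% confidence interval) take only a few lines of arithmetic. The sketch below uses invented NO2 values, not the study's data, and the normal approximation mean ± 1.96·SE for the interval:

```python
# Mean and normal-approximation 95% CI for a set of site measurements.
from math import sqrt

def mean_ci95(values):
    n = len(values)
    m = sum(values) / n
    sd = sqrt(sum((v - m) ** 2 for v in values) / (n - 1))  # sample standard deviation
    half = 1.96 * sd / sqrt(n)                              # half-width of the 95% CI
    return m, m - half, m + half

heavy_traffic = [70, 61, 66, 58, 63, 68, 60, 65]  # hypothetical NO2 in μg/m3, one per site
m, lo, hi = mean_ci95(heavy_traffic)
print(f"mean {m:.1f} μg/m3 (95% CI {lo:.1f}-{hi:.1f})")
```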
Abstract:
The basic objective of this study was to assess the economic costs of diseases of the respiratory and circulatory systems in the municipality of Cubatão (SP), Brazil. To that end, data on hospital admissions and on workdays lost to hospitalization (for ages 14 to 70) were drawn from the database of the Unified Health System (SUS). Results: Based on these data, a total of R$ 22.1 million was calculated to have been spent between 2000 and 2009 on circulatory and respiratory diseases. Part of this expenditure may be directly related to the emission of atmospheric pollutants in the municipality. To estimate the costs of pollution, data were also collected for two other municipalities in the Baixada Santista region (Guarujá and Peruíbe), which have less industrial activity than Cubatão. In both, average per capita expenditures on these two disease groups were found to be lower than in Cubatão, although the difference has narrowed appreciably in recent years.
Abstract:
This study aimed to build a land-use regression model to predict concentrations of inhalable particulate matter (PM10) in the city of São Paulo, Brazil. The model was based on the 2007 mean PM10 levels of nine monitoring stations. Demographic, road, and land-use data were obtained within concentric circles of 250 to 1,000 m to compose the model. Simple linear regressions were calculated to select the most robust variables without collinearity. Four variables entered the multiple regression model; only light traffic within concentric circles <250 m remained in the final model, which explained 63.8% of the variance in PM10. The land-use regression method proved quick and easy to apply; however, this model was based on PM10 measurements from only a few sites.
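The core of such a final model is a simple least-squares fit of one selected predictor against the station means. A minimal sketch, assuming a single predictor (light-traffic volume within 250 m) and invented station values rather than the study's data:

```python
# Closed-form simple linear regression: slope, intercept, and R^2,
# mimicking a one-predictor land-use-regression model.

def ols(x, y):
    """Ordinary least squares for one predictor; returns (slope, intercept, R^2)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((b - (intercept + slope * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Hypothetical values for nine stations.
traffic_250m = [1200, 800, 1500, 300, 950, 1100, 400, 1700, 650]  # vehicles/day
pm10 = [38, 33, 41, 27, 35, 37, 29, 44, 31]                       # μg/m3

slope, intercept, r2 = ols(traffic_250m, pm10)
print(f"PM10 = {intercept:.1f} + {slope:.4f} * traffic, R^2 = {r2:.2f}")
```

With only nine stations, the R^2 of such a fit is fragile, which is exactly the limitation the abstract notes.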
Abstract:
Air pollution has been present in the most varied settings over the past 250 years, ever since the Industrial Revolution accelerated the emission of pollutants, which until then had been limited to the domestic burning of plant and mineral fuels and to intermittent volcanic emissions. Today, approximately 50% of the world's population lives in cities and urban agglomerations and is exposed to progressively higher levels of air pollutants. This study is a non-systematic review of the different types and sources of air pollutants and of the respiratory effects attributed to exposure to these contaminants. Particulate and gaseous pollutants emitted by different sources can be credited with increases in disease symptoms, in visits to emergency services, and in the numbers of hospital admissions and deaths. Beyond decompensating pre-existing diseases, chronic exposure has helped to increase the number of new cases of asthma, COPD, and lung cancer in urban and rural areas alike, making air pollutants rival tobacco smoke for the role of leading risk factor for these diseases. In the routine practice of clinicians and pulmonologists, we hope to help consolidate the importance of investigating exposure to air pollutants, and the recognition that this risk factor deserves to be taken into account when choosing the best therapy both for controlling acute exacerbations of respiratory diseases and for maintaining patients between crises.
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a designed site has gained major attention, especially in the past decade. One of the objectives in PBEE is to quantify the seismic reliability of a structure (due to future random earthquakes) at a site. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is utilized as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a certain number of spectral acceleration ordinates over a certain interval of periods, Sa,avg(T1,…,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and with the advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence it is possible to employ existing models to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of structural response, which was minimized to assess the optimal range over which to compute Sa,avg. More work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are disregarded in this dissertation.
However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands that are dominated by the first mode of vibration, the advantage of using Sa,avg can be negligible relative to the conventionally used Sa(T1) and the advanced Sdi. For structural demands with significant higher-mode contribution, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in the seismic excitations. The assumed framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability in the response of the structure (demand) to seismic excitations, conditioned on the IM. The estimate of the seismic risk obtained with the simplified closed-form expression is affected by the choice of IM: the final seismic risk is not constant across IMs, although it remains within the same order of magnitude. Possible reasons concern the assumed nonlinear model, or the insufficiency of the selected IM. Since it is impossible to state the "real" probability of exceeding a limit state by looking at the total risk alone, the only way forward is to optimize the desirable properties of the IM.
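Sa,avg(T1,…,Tn) is commonly computed as the geometric mean of spectral-acceleration ordinates over the chosen period range, so that the IM captures both higher-mode periods and the period elongation of the softening structure. A minimal sketch, with a made-up response spectrum (not a real record):

```python
# Sa,avg as the geometric mean of Sa(T) ordinates over a period range.
from math import exp, log

def sa_avg(spectrum, periods):
    """Geometric mean of Sa(T); the periods are assumed to be keys of the
    spectrum dict (interpolation between ordinates is omitted here)."""
    vals = [spectrum[t] for t in periods]
    return exp(sum(log(v) for v in vals) / len(vals))

# Hypothetical 5%-damped spectrum: period (s) -> Sa (g).
spectrum = {0.5: 0.80, 0.75: 0.62, 1.0: 0.45, 1.5: 0.28, 2.0: 0.19}

# Averaging from below T1 up to a lengthened period reflects higher-mode
# and period-elongation effects that a single Sa(T1) ordinate misses.
im = sa_avg(spectrum, [0.5, 0.75, 1.0, 1.5, 2.0])
print(f"Sa,avg = {im:.3f} g  vs  Sa(T1) = {spectrum[1.0]:.2f} g")
```

Because each ordinate is an Sa(Ti) for which ground motion prediction models exist, the hazard in terms of Sa,avg remains computable by standard PSHA, which is the "computability" advantage noted above.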
Abstract:
A three-dimensional air pollution model for the short-term simulation of emission, transport and reaction of pollutants is presented. In the finite element simulation of these environmental processes over complex terrain, a mesh generator capable of adapting itself to the topographic characteristics is essential. A local refinement of tetrahedra is used in order to capture the plume rise. Then a wind field is computed by using a mass-consistent model and perturbing its vertical component to introduce the plume-rise effect. Finally, an Eulerian convection-diffusion-reaction model is used to simulate the pollutant dispersion…
Abstract:
This thesis studies the effect of air pollution on lichens in the surroundings of an industrial area in the south of Gran Canaria. Twelve sampling stations were established in order to follow, over time, the cover and vitality of the species studied.
Abstract:
Allergies are a complex of symptoms derived from altered IgE-mediated reactions of the immune system towards substances known as allergens. Allergic sensitization can be of food or respiratory origin and, in particular, apple and hazelnut allergens have been identified in pollens or fruits. Allergic cross-reactivity can occur when a patient reacts to similar allergens from different origins, justifying research in both systems, since in Europe a greater number of people suffer from apple fruit allergy but little evidence exists about the pollen. Apple fruit allergies are due to four different classes of allergens (Mal d 1, 2, 3, 4), whose allergenicity is related both to genotype and to tissue specificity; therefore I have investigated their presence also in pollen at different times of germination, to clarify the allergenic potential of apple pollen. I have observed that the same four classes of allergens found in fruit are expressed, at different levels, also in pollen, and their presence might support the view that apple pollen can be considered as allergenic as the fruit, suggesting that apple allergy could also be indirectly caused by sensitization to pollen. Climate changes resulting from increases in temperature and air pollution influence pollen allergenicity, which is responsible for the dramatic rise in respiratory allergies (hay fever, bronchial asthma, conjunctivitis). Although the link between climate change and pollen allergenicity is proven, the underlying mechanism is little understood. Transglutaminases (TGases), a class of enzymes able to post-translationally modify proteins, are activated under stress and involved in some inflammatory responses, enhancing the activity of pro-inflammatory phospholipase A2 and suggesting a role in allergies.
Recently, a calcium-dependent TGase activity has been identified in the pollen cell wall, raising the possibility that pollen TGase may have a role in the modification of the pollen allergens reported above, thus stabilizing them against proteases. This enzyme can also be involved in the transamidation of proteins of the human mucosa that interact with the pollen surface or, finally, the enzyme itself can represent an allergen, as suggested by studies on celiac disease. I have hypothesized that this pollen enzyme can be affected by climate changes and be involved in exacerbating the allergic response. The data presented in this thesis represent a scientific basis for the future development of studies devoted to verifying the hypothesis set out here. First, by laser confocal microscopy I have demonstrated the presence of an extracellular TGase on the surface of the grain, observed at both the apical and the proximal parts of the pollen tube (Iorio et al., 2008), which plays an essential role in apple pollen-tube growth, as suggested by the arrest of tube elongation by TGase inhibitors such as EGTA or R281. Its involvement in pollen-tube growth is further confirmed by the activity and gene-expression data, because TGase showed a peak between 15 min and 30 min of germination, when this process is well established, and an optimal pH around 6.5, which is close to that recorded for the germination medium. Moreover, the data show that pollen TGase may be a glycoprotein, as its glycosylation profile is linked both with the activation of the enzyme and with its localization at the pollen cell wall during germination; the data presented suggest that the active form of TGase involved in pollen-tube growth and pollen-stylar interaction is more exposed and more weakly bound to the cell wall.
Interestingly, TGase interacts with fibronectin (FN), a putative SAM or psECM component, possibly inducing intracellular signal transduction during the pollen-stylar interaction that occurs in the germination process, since a protein immunorecognized by an anti-FN antibody is also present in pollen, particularly at the level of the pollen-grain cell wall in a punctate pattern, but also along the shank of the pollen-tube wall, in a pattern similar to the signal obtained with the anti-TGase antibody. FN is a good substrate for the enzyme activity, better than DMC, which is usually used as the standard substrate for animal TGase. Thus, this pollen enzyme, necessary for germination, is exposed on the pollen surface and consequently can easily interact with mucosal proteins, as germinated pollen has been found in studies conducted on human mucus (Forlani, personal communication). I have obtained data showing that TGase activity increases very markedly when pollen is exposed to stressful conditions, such as climate changes and environmental pollution. I have used two different species of pollen, an aeroallergenic pollen (hazelnut, Corylus avellana), whose allergenicity is well documented, and an entomophilous pollen (apple, Malus domestica), which is not yet well characterized, to compare data on their mechanisms of response to stressors. The two pollens were exposed to climate-change conditions (different temperatures, relative humidity (rH), acid rain at pH 5.6, and copper pollution (3.10 µg/l)) and showed an increase in pollen-surface TGase activity that is not accompanied by an induced expression of the TGase protein immunoreactive with AtPNG1p.
Probably, climate changes induce an alteration of, or damage to, the pollen cell wall that leads the pollen grains to release their contents, including the TGase enzyme, into the medium, where the enzyme is free to carry out its function, as confirmed by the immunolocalization and by the in situ TGase activity assay data; morphological examination indicated pollen damage, significantly reduced viability and, under acid-rain conditions, an early germination of apple pollen, possibly enhancing TGase exposure on the pollen surface. Several pollen proteins were post-translationally modified, as was mammalian sPLA2, especially with Corylus pollen, which results in its activation, potentially altering pollen allergenicity and inflammation. Pollen TGase activity mimicked the behaviour of gpl TGase and AtPNG1p in the stimulation of sPLA2, even though the regulatory mechanism seems different from that of gpl TGase, because pollen TGase favours intermolecular cross-linking between various molecules of sPLA2, giving rise to high-molecular-weight protein networks that are normally more stable. In general, the pollens exhibited significant endogenous phospholipase activity, and differences were observed according to the allergenic (Corylus) or not-well-characterized allergenic (Malus) attitude of the pollen. However, even if with different levels of activation, the pollen enzymes share the ability to activate sPLA2, thus suggesting an important regulatory role in the activation of a key enzyme of the inflammatory response; among these responses, my interest was addressed to pollen allergy. In conclusion, from all the data presented (mainly the presence of allergens, the presence of an extracellular TGase, the increase in its activity following exposure to environmental pollution, and the activation of PLA2), I can conclude that Malus pollen, too, can behave as potentially allergenic.
The mechanisms described here that could affect the allergenicity of pollen may be the same as those occurring in fruit, paving the way for future studies on the identification of hyper- and hypo-allergenic cultivars, on the prevention of environmental-stressor effects and, possibly, on the production of transgenic plants.
Abstract:
The verification of numerical models is indispensable for improving quantitative precipitation forecasting (QPF). The aim of this work is to develop new methods for verifying the precipitation forecasts of the regional model of MeteoSwiss (COSMO-aLMo) and of the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). For this purpose, a novel observational dataset for Germany with hourly resolution was created and applied, and the new quality measure "SAL" was developed for evaluating the model forecasts. The novel, temporally and spatially highly resolved observational dataset for Germany is produced with the disaggregation method developed during MAP (Mesoscale Alpine Programme). The idea is to combine the high temporal resolution of the radar data (hourly) with the accuracy of the precipitation amounts from station measurements (within measurement error). This disaggregated dataset offers new possibilities for the quantitative verification of precipitation forecasts. For the first time, an area-wide analysis of the diurnal cycle of precipitation was carried out. It showed that no diurnal cycle exists in winter, and that COSMO-aLMo reproduces this well. In summer, by contrast, a clear diurnal cycle is found both in the disaggregated dataset and in COSMO-aLMo, although the precipitation maximum in COSMO-aLMo sets in too early, between 11-14 UTC compared with 15-20 UTC in the observations, and is clearly overestimated, by a factor of about 1.5. A new quality measure was developed because conventional grid-point-based error measures no longer do justice to model development. SAL consists of three independent components and is based on the identification of precipitation objects (threshold-dependent) within a region (e.g. a river catchment).
Differences between the model and observed precipitation fields within the region are computed with respect to structure (S), amplitude (A) and location (L). SAL was tested extensively on idealized and real examples. It detects and confirms known model deficits such as the diurnal-cycle problem and the simulation of too many relatively weak precipitation events, and it offers additional insight into the character of the errors, e.g. whether they lie mainly in the amplitude, in the displacement of a precipitation field, or in its structure (e.g. stratiform or small-scale convective). Daily and hourly precipitation sums of COSMO-aLMo and of the ECMWF model were verified with SAL. In a statistical sense, SAL shows good COSMO-aLMo forecast quality especially for stronger (and thus societally relevant) precipitation events, compared with weak precipitation. The comparison of the two models showed that the global model predicts more widespread precipitation and thus larger objects, whereas COSMO-aLMo produces clearly more realistic precipitation structures. Given the models' resolutions this is not surprising, but it could not be demonstrated with conventional error measures. The methods developed in this work are very useful for verifying the QPF of temporally and spatially highly resolved models. Using the disaggregated observational dataset together with SAL as a quality measure provides new insights into QPF and allows more appropriate statements about the quality of precipitation forecasts. Future applications of SAL lie in the verification of the new generation of numerical weather prediction models, which explicitly simulate the life cycle of deep convective cells.
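Of the three SAL components, the amplitude component has the most compact definition: with D the domain-averaged precipitation, A is the normalized difference of the model and observed means, bounded in [-2, 2] (0 is perfect agreement). A minimal sketch on toy 2D grids, not real model output:

```python
# SAL amplitude component: normalized difference of domain-mean precipitation.

def amplitude(model, obs):
    """A = (D_mod - D_obs) / (0.5 * (D_mod + D_obs)), with D the domain mean."""
    d_mod = sum(map(sum, model)) / (len(model) * len(model[0]))
    d_obs = sum(map(sum, obs)) / (len(obs) * len(obs[0]))
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

obs = [[0.0, 1.0], [2.0, 1.0]]    # toy observed precipitation field (mm/h)
model = [[0.0, 2.0], [4.0, 2.0]]  # model doubles the rain everywhere

print(amplitude(model, obs))  # positive: the model overestimates the amount
```

The structure and location components require the threshold-based object identification described above, which is why A alone is often used as the first sanity check of a forecast.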
Abstract:
Verification assesses the quality of quantitative precipitation forecasts (QPF) against observations and provides indications of systematic model errors. With the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years now, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterization, and the question arises whether these models deliver better forecasts. The highly resolved hourly observational dataset used in this work is a combination of radar and station measurements. First, using the German COSMO models as an example, it is shown that the newest-generation models simulate the mean diurnal cycle better, albeit with too weak a maximum that occurs somewhat too late; the old-generation models, in contrast, produce too strong a maximum that occurs considerably too early. Second, the new model achieves a better simulation of the spatial distribution of precipitation by markedly reducing the windward/leeward problem. To quantify these subjective assessments, daily QPFs of four models for Germany over an eight-year period were examined with SAL as well as with classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but hardly any difference appears in the other components. A further finding is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three SAL components shows that, especially in summer, the most finely resolved model (COSMO-DE) performs best.
This is mainly due to a more realistic structure, so SAL provides helpful information and confirms the subjective assessment. In 2007, the COPS and MAP D-PHASE projects took place and offered the opportunity to compare 19 models from three model categories with respect to their forecast performance in southwestern Germany for accumulation periods of 6 and 12 hours. Notable results are that (i) the smaller the grid spacing of a model, the more realistic the simulated precipitation distributions; (ii) regarding precipitation amount, the highly resolved models simulate less precipitation, usually too little; and (iii) the location component is simulated worst by all models. Analysing the forecast performance of these model types for convective situations reveals clear differences. In high-pressure situations, the models without convection parameterization are unable to simulate the convection, whereas the models with convection parameterization produce the right amount but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the highly resolved models providing more realistic fields. This weather-regime-based investigation is carried out more systematically using the convective timescale. A climatology compiled for the first time for Germany shows that the frequency of this timescale falls off towards larger values following a power law. The SAL results differ dramatically between the two regimes: for small values of the convective timescale they are good, whereas for large values both structure and amplitude are clearly overestimated. For temporally very highly resolved precipitation forecasts, the influence of timing errors becomes ever more important.
By optimizing/minimizing the L component of SAL within a time window (+/- 3 h) centred on the observation time, these timing errors can be determined. It is shown that, with the optimal time shift, the structure and amplitude of the COSMO-DE QPFs improve, so the model's fundamental ability to simulate the precipitation distribution realistically can be demonstrated more clearly.