886 results for: Surface-based deformations, Mesh deformations, Discrete Elastica, Python
Abstract:
We have used surface-based electrical resistivity tomography to detect and characterize preferential hydraulic pathways in the immediate downstream area of an abandoned, hazardous landfill. The landfill occupies the void left by a former gravel pit; its base is close to the groundwater table and lacks an engineered barrier. As such, this site is remarkably typical of many small- to medium-sized waste deposits throughout the densely populated and heavily industrialized foreland on both sides of the Alpine arc. Outflows of pollutants have lastingly contaminated local drinking water supplies and necessitated a partial remediation in the form of a synthetic cover barrier, which is meant to prevent meteoric water from percolating through the waste before reaching the groundwater table. Any future additional isolation of the landfill in the form of lateral barriers thus requires adequate knowledge of potential preferential hydraulic pathways for outflowing contaminants. Our results, inferred from a suite of tomographically inverted surface-based electrical resistivity profiles oriented roughly perpendicular to the local hydraulic gradient, indicate that potential contaminant outflows would predominantly occur along an unexploited lateral extension of the original gravel deposit. This extension appears as a distinct and laterally continuous high-resistivity anomaly in the resistivity tomograms. The interpretation is ground-truthed by a litholog from a nearby well. Since the probed glacio-fluvial deposits are largely devoid of mineralogical clay, the geometry of hydraulic and electrical pathways across the pore space of a given lithological unit can be assumed to be identical, which allows for an order-of-magnitude estimation of the overall permeability structure. These estimates indicate that the permeability of the imaged extension of the gravel body is at least two to three orders of magnitude higher than that of its finer-grained embedding matrix.
This corroborates the preeminent role of the high-resistivity anomaly as a potential preferential flow path.
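The clay-free assumption above lets resistivity stand in for pore geometry. As a hedged, order-of-magnitude sketch (not the authors' actual calculation), the snippet below combines Archie's formation factor with a Katz-Thompson-style length-scale relation; all resistivities, pore lengths, and the constant are illustrative assumptions.

```python
# Hedged sketch: order-of-magnitude permeability contrast between two clay-free
# lithologies from ERT resistivities, via Archie's formation factor and a
# Katz-Thompson-style length-scale relation k ~ l_c**2 / (c * F).
# All numbers below are illustrative assumptions, not values from the survey.

def formation_factor(rho_bulk, rho_water):
    """Archie formation factor F = rho_bulk / rho_water (clay-free medium)."""
    return rho_bulk / rho_water

def permeability(l_c, F, c=226.0):
    """Katz-Thompson-style estimate k = l_c**2 / (c * F), k in m^2."""
    return l_c**2 / (c * F)

rho_water = 20.0                                 # Ohm.m, assumed pore water
F_gravel = formation_factor(600.0, rho_water)    # high-resistivity gravel body
F_matrix = formation_factor(150.0, rho_water)    # finer-grained embedding matrix

# Characteristic pore lengths (assumed): gravels have much larger pores.
k_gravel = permeability(2e-3, F_gravel)
k_matrix = permeability(5e-5, F_matrix)

contrast = k_gravel / k_matrix                   # dimensionless contrast
print(f"k_gravel/k_matrix ≈ {contrast:.0f}")
```

With these assumed inputs the contrast lands in the two-to-three orders-of-magnitude range the abstract reports; the driver is the squared pore-length ratio, not the resistivity ratio alone.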
Abstract:
Estimation of the spatial statistics of subsurface velocity heterogeneity from surface-based geophysical reflection survey data is a problem of significant interest in seismic and ground-penetrating radar (GPR) research. A method to effectively address this problem has been recently presented, but our knowledge regarding the resolution of the estimated parameters is still inadequate. Here we examine this issue using an analytical approach that is based on the realistic assumption that the subsurface velocity structure can be characterized as a band-limited scale-invariant medium. Our work importantly confirms recent numerical findings that the inversion of seismic or GPR reflection data for the geostatistical properties of the probed subsurface region is sensitive to the aspect ratio of the velocity heterogeneity and to the decay of its power spectrum, but not to the individual values of the horizontal and vertical correlation lengths.
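The band-limited scale-invariant medium assumed above can be illustrated numerically. The sketch below (an illustrative stand-in, not the paper's analytical machinery) generates a 2-D von Kármán-type random field whose power-spectrum decay and heterogeneity aspect ratio are set explicitly.

```python
import numpy as np

# Hedged sketch: generate a 2-D band-limited, scale-invariant velocity
# perturbation field with a prescribed aspect ratio of heterogeneity, by
# spectral filtering of white noise. Parameter choices are illustrative.

def powerlaw_field(n, dx, ax, az, nu, seed=0):
    """von Karman-type random field; ax, az are horizontal/vertical
    correlation lengths, nu controls the power-spectrum decay."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    kz = np.fft.fftfreq(n, d=dx) * 2 * np.pi
    KX, KZ = np.meshgrid(kx, kz, indexing="ij")
    # von Karman power spectrum: decay set by nu, anisotropy by ax/az
    S = (1.0 + (ax * KX) ** 2 + (az * KZ) ** 2) ** (-(nu + 1.0))
    noise = rng.standard_normal((n, n))
    field = np.fft.ifft2(np.fft.fft2(noise) * np.sqrt(S)).real
    return field / field.std()

f = powerlaw_field(n=128, dx=0.5, ax=10.0, az=2.0, nu=0.3)
print(f.shape, round(float(f.std()), 2))
```

Rescaling ax and az by a common factor changes the field only within the resolved band, which is one way to see why reflection data constrain the aspect ratio ax/az and the decay exponent rather than the two lengths individually.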
Abstract:
Surface-based ground penetrating radar (GPR) and electrical resistance tomography (ERT) are common tools for aquifer characterization, because both methods provide data that are sensitive to hydrogeologically relevant quantities. To retrieve bulk subsurface properties at high resolution, we suggest incorporating structural information derived from GPR reflection data when inverting surface ERT data. This reduces resolution limitations, which might otherwise hinder quantitative interpretations. Surface-based GPR reflection and ERT data were recorded on an exposed gravel bar within a restored section of a previously channelized river in northeastern Switzerland to characterize an underlying gravel aquifer. The GPR reflection data acquired over an area of 240×40 m map the aquifer's thickness and two internal sub-horizontal regions with different depositional patterns. The interface between these two regions and the boundary of the aquifer with the underlying clay are incorporated in an unstructured ERT mesh. Subsequent inversions are performed without applying smoothness constraints across these boundaries. Inversion models obtained by using these structural constraints contain subtle resistivity variations within the aquifer that are hardly visible in standard inversion models as a result of strong vertical smearing in the latter. In the upper aquifer region, with high GPR coherency and horizontal layering, the resistivity is moderately high (>300 Ωm). We suggest that this region consists of sediments that were rearranged during more than a century of channelized flow. In the lower, low-coherency region, the GPR image reveals fluvial features (e.g., foresets) and generally more heterogeneous deposits. In this region, the resistivity is lower (~200 Ωm), which we attribute to increased amounts of fines in some of the well-sorted fluvial deposits. We also find elongated conductive anomalies that correspond to the location of river embankments that were removed in 2002.
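The idea of dropping smoothness constraints across GPR-derived boundaries can be sketched on a toy 1-D model. The helper below is a hypothetical illustration of the regularization bookkeeping, not the inversion code used in the study.

```python
import numpy as np

# Hedged sketch of the structural-constraint idea: build a first-difference
# smoothness (roughness) operator for a 1-D resistivity model and drop the
# smoothing across cell faces that coincide with GPR-derived interfaces, so
# the inversion may place sharp resistivity jumps there. Illustrative only.

def roughness_operator(n_cells, interface_faces):
    """First-difference matrix W (one row per face between adjacent cells);
    rows at `interface_faces` are removed so no smoothness is enforced there."""
    W = np.zeros((n_cells - 1, n_cells))
    for i in range(n_cells - 1):
        W[i, i], W[i, i + 1] = -1.0, 1.0
    keep = [i for i in range(n_cells - 1) if i not in set(interface_faces)]
    return W[keep]

W = roughness_operator(n_cells=6, interface_faces=[2])    # interface after cell 2
m = np.log(np.array([300, 310, 305, 200, 195, 205.0]))    # sharp jump at face 2
print(W.shape)                                            # one row removed
print(round(float((W @ m) @ (W @ m)), 4))                 # penalty ignores jump
```

Because the penalized roughness no longer includes the jump at the known interface, a regularized inversion is free to keep that contrast sharp instead of smearing it vertically.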
Abstract:
Quantifying the spatial configuration of hydraulic conductivity (K) in heterogeneous geological environments is essential for accurate predictions of contaminant transport, but is difficult because of the inherent limitations in resolution and coverage associated with traditional hydrological measurements. To address this issue, we consider crosshole and surface-based electrical resistivity geophysical measurements, collected in time during a saline tracer experiment. We use a Bayesian Markov-chain-Monte-Carlo (McMC) methodology to jointly invert the dynamic resistivity data, together with borehole tracer concentration data, to generate multiple posterior realizations of K that are consistent with all available information. We do this within a coupled inversion framework, whereby the geophysical and hydrological forward models are linked through an uncertain relationship between electrical resistivity and concentration. To minimize computational expense, a facies-based subsurface parameterization is developed. The Bayesian-McMC methodology allows us to explore the potential benefits of including the geophysical data into the inverse problem by examining their effect on our ability to identify fast flowpaths in the subsurface, and their impact on hydrological prediction uncertainty. Using a complex, geostatistically generated, two-dimensional numerical example representative of a fluvial environment, we demonstrate that flow model calibration is improved and prediction error is decreased when the electrical resistivity data are included. The worth of the geophysical data is found to be greatest for long spatial correlation lengths of subsurface heterogeneity with respect to wellbore separation, where flow and transport are largely controlled by highly connected flowpaths.
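The Metropolis ingredient of the Bayesian-McMC methodology can be illustrated with a deliberately minimal toy problem; the forward model, prior, and noise level below are illustrative stand-ins, not those of the coupled hydrogeophysical inversion.

```python
import numpy as np

# Hedged toy sketch of the Metropolis (McMC) sampling step: sample the
# posterior of a single log10(K) value given noisy observations, with a
# Gaussian prior and likelihood. All settings are illustrative stand-ins.

rng = np.random.default_rng(1)
true_logK = -4.0
data = true_logK + 0.2 * rng.standard_normal(20)      # noisy "observations"

def log_post(logK):
    prior = -0.5 * ((logK + 4.5) / 2.0) ** 2          # broad Gaussian prior
    like = -0.5 * np.sum(((data - logK) / 0.2) ** 2)  # Gaussian likelihood
    return prior + like

samples, x = [], -6.0
for _ in range(5000):
    prop = x + 0.1 * rng.standard_normal()            # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(x):
        x = prop                                      # Metropolis accept
    samples.append(x)

post = np.array(samples[1000:])                       # discard burn-in
print(round(float(post.mean()), 1))
```

In the study, each McMC step would instead propose facies parameters and score them through coupled geophysical and hydrological forward models; the accept/reject logic is the same.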
Abstract:
The City of Marquette lies in the 65,000-acre Mississippi River watershed and is surrounded by steep bluffs. Though scenic, these bluffs make controlling water runoff during storm events a significant challenge. Flash flooding from the local watershed has plagued the city for decades. The people of Marquette have committed to preserving the water quality of key natural resources in the area, including Bloody Run Creek and its associated wetlands, by undertaking projects to control the spread of debris and sediment caused by excess runoff during area storm events. Following a July 2007 storm (over 8" of rain in 24 hours) which caused unprecedented flood damage, the City retained an engineering firm to study the area and provide recommendations to eliminate or greatly reduce uncontrolled runoff into the Bloody Run Creek wetland, infrastructure damage, and personal property loss. Marquette has received the Iowa Great Places designation and has demonstrated its commitment to wetland preservation with the construction of Phase I of this water quality project. The Bench Area Storm Water Management Plan prepared by the City in 2008 made a number of recommendations to mitigate flash flooding by improving storm water conveyance paths, detention, and infrastructure within the Bench area. Due to steep slopes and rocky terrain, infiltration-based systems, though desirable, were not an option, leaving surface-based systems as the alternative. Runoff from the 240-acre watershed comes primarily from large, steep drainage areas to the south and west, flowing to the Bench area down three hillside routes, designated as South East, South Central and South West. Completion of Phase I, which included an increased storage capacity of the upper pond, addressed the South East and South Central areas. The increased upper pond capacity will now allow Phase II to proceed.
Phase II will address runoff from the South West drainage area, which engineers have estimated to produce as much water volume as the South Central and South East areas combined. Total costs for Phase I are $1.45 million, of which Marquette has invested $775,000 and IJOBS funding contributed $677,000. Phase II costs are estimated at $617,000. WIRB funding support of $200,000 would expedite project completion, lessen the long-term debt impact to the community, and aid in the preservation of Bloody Run Creek and the adjoining wetlands more quickly than Marquette could accomplish on its own.
Abstract:
The vast alluvial Seeland aquifer system in northwestern Switzerland is subjected to environmental challenges due to intensive agriculture, roads, cities and industrial activities.
Optimal knowledge of the hydrological resources of this aquifer system is therefore important for their protection. Two representative sites of the Seeland aquifer, Kappelen and Grenchen, were investigated using surface-based geoelectric methods and geophysical borehole logging. By integrating hydrogeological and hydrogeophysical methods, a reliable characterization of the aquifer system at these two sites can be performed in order to better understand the governing flow and transport processes. At the Kappelen site, surface-based geoelectric methods allowed us to identify the various geoelectric facies present and highlighted their tabular, horizontal structure. The site hosts an unconfined aquifer made up of 15 m of gravels with an important sandy fraction, bounded below by a shaly glacial aquitard. Electrical and nuclear logging measurements allow for constraining the petrophysical and hydrological parameters of the saturated gravels. In agreement with mineralogical analyses, the results indicate that the matrix of the probed formations is dominated by quartz and calcite, with densities of 2.65 and 2.71 g/cc, respectively. These two minerals constitute approximately 65 to 75% of the mineral matrix. Matrix density values vary from 2.60 to 2.75 g/cc. Total porosity values obtained from nuclear logs range from 20 to 30% and are consistent with those obtained from electrical logs, which range from 22 to 29%. Together with the inherently low natural gamma radiation and the matrix density values obtained from other nuclear logging measurements, this indicates that the Kappelen aquifer is essentially devoid of clay. Hydraulic conductivity values obtained by the dilution technique vary between 3×10^-4 and 5×10^-2 m/s, while pumping tests give values ranging from 10^-4 to 10^-2 m/s. Grain size analyses of gravel samples collected from borehole cores reveal significant granulometric heterogeneity of these deposits. 
Calculations based on these granulometric data show that the sand-, silt- and clay-sized fractions constitute between 10 and 40% of the sample mass. The presence of these fine elements in general, and their spatial distribution in particular, is important as it largely controls the distribution of the total and effective porosity as well as the hydraulic conductivity. Effective porosity values ranging from 11 to 25%, estimated from grain size analyses, indicate that the zones of higher hydraulic conductivity correspond to the zones dominated by gravels. The methodology established at the Kappelen site was then applied to the Grenchen site. Results from surface-based geoelectric measurements indicate that it is a confined aquifer made up predominantly of shaly sands with intercalated sand lenses, confined by impermeable shaly clays. The Grenchen aquifer has a relatively tabular, horizontal structure with a maximum thickness of 25 m in the south and southwest, where the sand lenses are most prominent. Its petrophysical and hydrological characteristics were determined using electrical and nuclear logging. Natural gamma radiation values ranging from 30 to 100 cps indicate the presence of a clay fraction but not of pure clay layers. Total porosity values obtained from electrical logs vary from 25 to 42%, whereas those obtained from nuclear logs vary from 15 to 25%. This over-estimation confirms the presence of clays. Density values obtained from nuclear logs, varying from 2.25 to 2.45 g/cc, in conjunction with the total porosity values, indicate that the dominant matrix minerals are quartz and calcite. Matrix density values vary between 2.65 and 2.75 g/cc. Hydraulic conductivity values obtained by the dilution technique vary from 2×10^-6 to 5×10^-4 m/s.
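The porosity estimates from electrical logs rest on Archie's law, and the clay-induced over-estimation noted for Grenchen follows directly from it. Below is a hedged sketch with illustrative values, not the logged data.

```python
# Hedged sketch: total porosity from an electrical (resistivity) log via
# Archie's law, phi = (a * rho_w / rho_t) ** (1/m), for a water-saturated
# formation. In clay-free gravels this agrees with nuclear-log porosity;
# clay adds surface conduction, lowers rho_t, and so inflates the Archie
# estimate, as observed at the Grenchen site. All values are illustrative.

def archie_porosity(rho_t, rho_w, a=1.0, m=2.0):
    """Invert Archie's law for porosity (water-saturated, clay-free)."""
    return (a * rho_w / rho_t) ** (1.0 / m)

rho_w = 20.0                                 # Ohm.m, assumed pore-water value
phi_clean = archie_porosity(320.0, rho_w)    # clay-free gravel
phi_shaly = archie_porosity(130.0, rho_w)    # clay lowers rho_t: phi too high
print(round(phi_clean, 2), round(phi_shaly, 2))
```

The second estimate is inflated only because surface conduction violates the clay-free assumption behind Archie's law, which is exactly why the electrical-log porosities (25-42%) exceed the nuclear-log values (15-25%) at Grenchen.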
Abstract:
In this Master's thesis we discuss issues related to measuring the radar cross section (effective scattering surface) based on the Doppler effect. The detected signal was modeled, and narrowband filtering of the signal using a low-frequency amplifier was examined. The parameters of the proposed horn antennas were studied, and radar cross-section charts for three different objects were obtained.
Abstract:
The miniaturization of conventional laboratory and analysis technology plays a central role in the life sciences and in medical diagnostics. Novel, low-cost technology platforms such as lab-on-a-chip (LOC) systems or micro total analysis systems (µTAS) promise great societal benefit, particularly in personalized medicine, for the early and non-invasive diagnosis of disease-specific indicators. The point-of-care use of inexpensive and reliable microchips built to high quality standards eliminates costly and time-consuming central-laboratory analyses, which also opens opportunities for global deployment, especially in emerging and developing countries. The technical challenges in realizing modern LOC systems lie in the controlled and reliable handling of very small liquid volumes and in their diagnostic read-out. In this context, the successful integration of remotely controllable transport of biocompatible magnetic micro- and nanoparticles is considered a key ingredient, owing to the versatile applicability afforded by their unique material properties. Applications range from the accelerated, active mixing of microfluidic sample volumes, through increasing the molecular interaction rate in biosensors, to the isolation and purification of disease-specific indicators. Approaches described in the literature are based on the dynamic transformation of a macroscopic, time-dependent external magnetic field into a microscopically varying potential-energy landscape above magnetically structured substrates, from which a directed, remotely controllable particle motion results. 
However, central aspects, such as the theoretical modeling and experimental characterization of the magnetic field landscape close to the surface of the structured substrates and the theoretical description of the mixing effects, have so far not been examined in detail, although they are essential for a detailed understanding of the underlying mechanisms and hence for the market entry of future devices. In the present work, a novel approach to integrating a concept for the remotely controllable transport of magnetic particles into modern LOC systems was therefore pursued, using magnetically structured exchange-bias (EB) thin-film systems. The results show that ion-bombardment-induced magnetic patterning (IBMP) of EB systems is well suited for producing tailored magnetic field landscapes (MFLs) above the substrate surface, whose strength and spatial profile on nano- and micrometer length scales can be tuned deliberately via IBMP through the material parameters of the EB system. In the course of this work, modern experimental techniques (scanning Hall-probe microscopy and scanning magnetoresistive microscopy) were employed for the first time, in combination with a purpose-built theoretical model, to map the MFL at different distances from the substrate surface. Based on this quantitative knowledge of the MFL, a novel concept for the remotely controllable transport of magnetic particles was developed, in which particle velocities on the order of 100 µm/s can be achieved using external magnetic field strengths of a few millitesla, without modifying the magnetic state of the substrate. 
The investigations further show that the strength of the external magnetic field, the strength and gradient of the MFL, the field-induced magnetic moment of the particles, and the size and artificially adjustable distance of the particles from the substrate surface can all be used as key parameters to quantitatively tune the particle velocity. Finally, a numerical simulation model was successfully developed that enables the quantitative theoretical study of active mixing based on the presented particle-transport concept, so that the geometry of the microfluidic channel structures on an LOC system can be tailored to specific applications.
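The controlling quantities listed in the abstract (field gradient, induced moment, particle size) enter a simple Stokes-drag velocity estimate. The sketch below is not the thesis's model; its parameter values are assumptions chosen only to land near the reported ~100 µm/s scale.

```python
import math

# Hedged sketch: Stokes-drag estimate of a magnetic bead's transport velocity,
# v = F_mag / (6*pi*eta*r) with F_mag = m * dB/dx, reflecting the abstract's
# list of controlling quantities (field gradient, induced moment, bead size).
# All parameter values are illustrative assumptions, not measured data.

def bead_velocity(m_moment, grad_B, radius, eta=1.0e-3):
    """m_moment in A*m^2, grad_B in T/m, radius in m, eta in Pa*s (water)."""
    F = m_moment * grad_B                      # magnetic force, N
    return F / (6 * math.pi * eta * radius)    # Stokes drag balance, m/s

v = bead_velocity(m_moment=2e-15, grad_B=1e3, radius=1e-6)
print(f"{v * 1e6:.0f} um/s")
```

The linear dependence on moment and gradient, and the inverse dependence on radius, mirror the tuning knobs identified in the work.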
Abstract:
We build up the mathematical connection between the "Expectation-Maximization" (EM) algorithm and gradient-based approaches for maximum likelihood learning of finite Gaussian mixtures. We show that the EM step in parameter space is obtained from the gradient via a projection matrix $P$, and we provide an explicit expression for the matrix. We then analyze the convergence of EM in terms of special properties of $P$ and provide new results analyzing the effect that $P$ has on the likelihood surface. Based on these mathematical results, we present a comparative discussion of the advantages and disadvantages of EM and other algorithms for the learning of Gaussian mixture models.
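The model class discussed here can be made concrete with a minimal EM loop for a two-component 1-D Gaussian mixture; the data and initialization are illustrative, and the projection-matrix analysis itself is not reproduced.

```python
import numpy as np

# Hedged sketch: EM for a 1-D two-component Gaussian mixture, the model class
# analyzed in the abstract. Data and initialization are illustrative only.

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 200), rng.normal(3, 1, 200)])

def em_step(x, w, mu, var):
    # E-step: responsibility of component 1 for each data point
    pdf = lambda m, v: np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
    r1 = w * pdf(mu[0], var[0])
    r2 = (1 - w) * pdf(mu[1], var[1])
    g = r1 / (r1 + r2)
    # M-step: closed-form updates of mixing weight, means, and variances
    w_new = g.mean()
    mu_new = [np.sum(g * x) / g.sum(), np.sum((1 - g) * x) / (1 - g).sum()]
    var_new = [np.sum(g * (x - mu_new[0]) ** 2) / g.sum(),
               np.sum((1 - g) * (x - mu_new[1]) ** 2) / (1 - g).sum()]
    return w_new, mu_new, var_new

w, mu, var = 0.5, [-1.0, 1.0], [1.0, 1.0]
for _ in range(50):
    w, mu, var = em_step(x, w, mu, var)
print(round(mu[0], 1), round(mu[1], 1))
```

A gradient-ascent alternative would update the same parameters along the likelihood gradient with a step size; the abstract's result is that the EM update equals that gradient premultiplied by a projection matrix $P$.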
Abstract:
We present a method for analyzing the curvature (second derivatives) of the conical intersection hyperline at an optimized critical point. Our method uses the projected Hessians of the degenerate states after elimination of the two branching space coordinates, and is equivalent to a frequency calculation on a single Born-Oppenheimer potential-energy surface. Based on the projected Hessians, we develop an equation for the energy as a function of a set of curvilinear coordinates where the degeneracy is preserved to second order (i.e., the conical intersection hyperline). The curvature of the potential-energy surface in these coordinates is the curvature of the conical intersection hyperline itself, and thus determines whether one has a minimum or saddle point on the hyperline. The equation used to classify optimized conical intersection points depends in a simple way on the first- and second-order degeneracy splittings calculated at these points. As an example, for fulvene, we show that the two optimized conical intersection points of C2v symmetry are saddle points on the intersection hyperline. Accordingly, there are further intersection points of lower energy, and one of C2 symmetry - presented here for the first time - is found to be the global minimum in the intersection space.
Abstract:
The impact of selected observing systems on forecast skill is explored using the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr reanalysis (ERA-40) system. Analyses have been produced for a surface-based observing system typical of the period prior to 1945/1950, a terrestrial-based observing system typical of the period 1950-1979 and a satellite-based observing system consisting of surface pressure and satellite observations. Global prediction experiments have been undertaken using these analyses, which are available every 6 h, as initial states for the boreal winters of 1990/1991 and 2000/2001 and the summer of 2000, using a more recent version of the ECMWF model. The results show that for 500-hPa geopotential height, as a representative field, the terrestrial system in the Northern Hemisphere extratropics is only slightly inferior to the control system, which makes use of all observations for the analysis, and is also more accurate than the satellite system. There are indications that the skill of the terrestrial system worsens slightly and the satellite system improves somewhat between 1990/1991 and 2000/2001. The forecast skill in the Southern Hemisphere is dominated by the satellite information and this dominance is larger in the latter period. The overall skill is only slightly worse than that of the Northern Hemisphere. In the tropics (20 degrees S-20 degrees N), using the wind at 850 and 250 hPa as representative fields, the information content in the terrestrial and satellite systems is almost equal and complementary. The surface-based system has very limited skill restricted to the lower troposphere of the Northern Hemisphere. Predictability calculations show a potential for a further increase in predictive skill of 1-2 d in the extratropics of both hemispheres, but a potential for a major improvement of many days in the tropics.
As well as the Eulerian perspective of predictability, the storm tracks have been calculated from all experiments and validated for the extratropics to provide a Lagrangian perspective.
Abstract:
The impact of selected observing systems on the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr reanalysis (ERA40) is explored by mimicking observational networks of the past. This is accomplished by systematically removing observations from the present observational data base used by ERA40. The observing systems considered are a surface-based system typical of the period prior to 1945/50, obtained by only retaining the surface observations, a terrestrial-based system typical of the period 1950-1979, obtained by removing all space-based observations, and finally a space-based system, obtained by removing all terrestrial observations except those for surface pressure. Experiments using these different observing systems have been limited to seasonal periods selected from the last 10 yr of ERA40. The results show that the surface-based system has severe limitations in reconstructing the atmospheric state of the upper troposphere and stratosphere. The terrestrial system has major limitations in generating the circulation of the Southern Hemisphere with considerable errors in the position and intensity of individual weather systems. The space-based system is able to analyse the larger-scale aspects of the global atmosphere almost as well as the present observing system but performs less well in analysing the smaller-scale aspects as represented by the vorticity field. Here, terrestrial data such as radiosondes and aircraft observations are of paramount importance. The terrestrial system in the form of a limited number of radiosondes in the tropics is also required to analyse the quasi-biennial oscillation phenomenon in a proper way. The results also show the dominance of the satellite observing system in the Southern Hemisphere. These results all indicate that care is required in using current reanalyses in climate studies due to the large inhomogeneity of the available observations, in particular in time.
Abstract:
A new method for assessing forecast skill and predictability that involves the identification and tracking of extratropical cyclones has been developed and implemented to obtain detailed information about the prediction of cyclones that cannot be obtained from more conventional analysis methodologies. The cyclones were identified and tracked along the forecast trajectories, and statistics were generated to determine the rate at which the position and intensity of the forecasted storms diverge from the analyzed tracks as a function of forecast lead time. The results show a higher level of skill in predicting the position of extratropical cyclones than the intensity. They also show that there is potential to improve the skill in predicting the position by 1-1.5 days and the intensity by 2-3 days, via improvements to the forecast model. Further analysis shows that forecasted storms move at a slower speed than analyzed storms on average and that there is a larger error in the predicted amplitudes of intense storms than of weaker storms. The results also show that some storms can be predicted up to 3 days before they are identified as an 850-hPa vorticity center in the analyses. In general, the results show a higher level of skill in the Northern Hemisphere (NH) than the Southern Hemisphere (SH); however, the rapid growth of NH winter storms is not very well predicted. The impact that observations of different types have on the prediction of the extratropical cyclones has also been explored, using forecasts integrated from analyses that were constructed from reduced observing systems. Terrestrial, satellite-based, and surface-based systems were investigated, and the results showed that the predictive skill of the terrestrial system was superior to the satellite system in the NH. Further analysis showed that the satellite system was not very good at predicting the growth of the storms.
In the SH the terrestrial system has significantly less skill than the satellite system, highlighting the dominance of satellite observations in this hemisphere. The surface system has very poor predictive skill in both hemispheres.
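The track-divergence statistic underlying these results can be sketched as the great-circle separation between forecast and analyzed cyclone centres at each lead time; the track points below are made up for illustration and are not from the study.

```python
import numpy as np

# Hedged sketch of the track-verification statistic: great-circle separation
# between forecast and analyzed cyclone centres as a function of lead time.
# The track coordinates below are illustrative, not real cyclone tracks.

def great_circle_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Spherical-law-of-cosines distance in km between two (lat, lon) points."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlon = np.radians(np.asarray(lon2) - np.asarray(lon1))
    cosang = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dlon)
    return R * np.arccos(np.clip(cosang, -1.0, 1.0))

# analyzed vs forecast centre positions at lead times 0, 24, 48 h (made up)
ana = [(50.0, -30.0), (52.0, -20.0), (54.0, -10.0)]
fcs = [(50.0, -30.0), (51.5, -21.0), (53.0, -12.5)]
errs = [great_circle_km(a[0], a[1], f[0], f[1]) for a, f in zip(ana, fcs)]
print([round(float(e)) for e in errs])   # position error grows with lead time
```

Averaging such separations over many matched forecast/analysis track pairs, per lead time, gives the position-error growth curves the abstract describes; an analogous statistic on central vorticity gives the intensity error.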
Abstract:
Measurements of the top-of-the-atmosphere outgoing longwave radiation (OLR) for July 2003 from Meteosat-7 are used to assess the performance of the numerical weather prediction version of the Met Office Unified Model. A significant difference is found over desert regions of northern Africa, where the model emits up to 35 Wm−2 too much OLR in the monthly mean. By cloud-screening the data we find an error of up to 50 Wm−2 associated with cloud-free areas, which suggests an error in the model surface temperature, surface emissivity, or atmospheric transmission. By building up a physical model of the radiative properties of mineral dust based on in situ, surface-based, and satellite remote-sensing observations, we show that the most plausible explanation for the discrepancy in OLR is the neglect of mineral dust in the model. The calculations suggest that mineral dust can exert a longwave radiative forcing of as much as 50 Wm−2 in the monthly mean for 1200 UTC in cloud-free regions, which accounts for the discrepancy between the model and the Meteosat-7 observations. This suggests that inclusion of the radiative effects of mineral dust will lead to a significant improvement in the radiation balance of numerical weather prediction models with subsequent improvements in performance.
Abstract:
The Aerosol Direct Radiative Experiment (ADRIEX) took place over the Adriatic and Black Seas during August and September 2004 with the aim of characterizing anthropogenic aerosol in these regions in terms of its physical and optical properties and establishing its impact on radiative balance. Eight successful flights of the UK BAE-146 Facility for Atmospheric Airborne Measurements were completed together with surface-based lidar and AERONET measurements, in conjunction with satellite overpasses. This paper outlines the motivation for the campaign, the methodology and instruments used, describes the synoptic situation and provides an overview of the key results. ADRIEX successfully measured a range of aerosol conditions across the northern Adriatic, Po Valley and Black Sea. Generally two layers of aerosol were found in the vertical: in the flights over the Black Sea and the Po Valley these showed differences in chemical and microphysical properties, whilst over the Adriatic the layers were often more similar. Nitrate aerosol was found to be important in the Po Valley region. The use of new instruments to measure the aerosol chemistry and mixing state and to use this information in determining optical properties is demonstrated. These results are described in much more detail in the subsequent papers of this special issue.