967 results for Capture-recapture Data
Abstract:
The aim of this master's thesis is to study which processes increase the auxiliary power consumption in carbon capture and storage (CCS) and whether variable speed drives can reduce that consumption. The cost of carbon capture and storage is also studied. Data on auxiliary power consumption in carbon capture is gathered from studies and estimates made by various research centres; based on these, an overview is presented of how the auxiliary power consumption is divided between the different processes in carbon capture. In a literature review, the operation of three basic carbon capture systems is described, along with the different methods of transporting carbon dioxide and the available storage options. The thesis concludes by identifying the processes that consume most of the auxiliary power and evaluating the possibilities for reducing that consumption. The costs of carbon capture, transport, and storage are also evaluated at this point, including the case in which carbon capture and storage systems are fully deployed. From the results, it can be estimated in which processes variable speed drives can be used and what reduction in cost and power consumption could be achieved. The results also show how large an undertaking carbon capture and storage is if fully deployed.
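As background on the mechanism only (the affinity laws are standard centrifugal pump and fan relations, not figures from this thesis): when a variable speed drive replaces throttling control on a pump or fan, shaft power scales approximately with the cube of rotational speed,

\( P_2 / P_1 = \left( n_2 / n_1 \right)^3 \),

so running such auxiliary equipment at, say, 80% speed requires only about \( 0.8^3 \approx 51\% \) of full-speed power. This cubic relation is what makes the fans and pumps of a capture plant natural candidates for variable speed drives.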
Abstract:
In the power market, electricity prices play an important role at the economic level. The behavior of a price trend may change over time in terms of its mean value or its volatility, a change usually known as a structural break; or it may change for a period of time before reverting to its original behavior or switching to yet another style of behavior, which is typically termed a regime shift or regime switch. Our task in this thesis is to develop an electricity price time-series model that captures the fat-tailed distributions that can explain this behavior, and to analyze it for better understanding. For the NordPool data used, the obtained Markov regime-switching model operates on two regimes: regular and non-regular. Three criteria have been considered: a price difference criterion, a capacity/flow difference criterion, and a spikes-in-Finland criterion. The suitability of GARCH modeling for simulating multi-regime behavior is also studied.
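As a minimal sketch of the kind of two-regime dynamics described above (not the thesis's actual model: the regime names follow the abstract, but the transition probabilities, mean-reversion rate, and distribution parameters below are invented for illustration), a Markov regime-switching price simulation in Python might look like:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-regime Markov chain: 0 = regular, 1 = non-regular (spiky).
# Transition probabilities are illustrative, not estimated from NordPool data.
P = np.array([[0.95, 0.05],    # regular -> regular / non-regular
              [0.30, 0.70]])   # non-regular -> regular / non-regular

n = 1000
mean_level = 30.0              # assumed long-run price level (EUR/MWh), invented
regime = np.zeros(n, dtype=int)
price = np.zeros(n)
price[0] = mean_level

for t in range(1, n):
    regime[t] = rng.choice(2, p=P[regime[t - 1]])
    reversion = 0.2 * (mean_level - price[t - 1])
    if regime[t] == 0:
        # Regular regime: mean reversion with light-tailed Gaussian noise.
        price[t] = price[t - 1] + reversion + rng.normal(0.0, 1.5)
    else:
        # Non-regular regime: heavy-tailed (Student-t) shocks produce fat-tailed spikes.
        price[t] = price[t - 1] + reversion + 8.0 * rng.standard_t(3)

print(f"time spent in non-regular regime: {regime.mean():.1%}")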
Abstract:
In ecology, for example in studies of the services provided by ecosystems, descriptive, explanatory, and predictive modelling each have their own distinct place. Certain well-defined situations call for one or another of these types of modelling, and the right choice must be made so that the model can be used in a way consistent with the objectives of the study. In this work, we first explore the explanatory power of the multivariate regression tree (MRT). This modelling method is based on a recursive binary-partitioning algorithm and on a resampling method used to prune the final model, which is a tree, so as to obtain the model producing the best predictions. This asymmetric two-table analysis yields homogeneous groups of objects of the response table, with the splits between groups corresponding to cut-points of the variables of the explanatory table that mark the most abrupt changes in the response. We show that, to compute the explanatory power of the MRT, an adjusted coefficient of determination must be defined in which the degrees of freedom of the model are estimated by means of an algorithm. This estimate of the population coefficient of determination is practically unbiased. Since the MRT rests on assumptions of discontinuity whereas canonical redundancy analysis (RDA) models continuous linear gradients, comparing their respective explanatory powers makes it possible, among other things, to determine what type of pattern the response follows as a function of the explanatory variables. The comparison of explanatory power between RDA and MRT was motivated by the extensive use of RDA to study beta diversity. Still from an explanatory perspective, we define a new procedure called the cascade multivariate regression tree (CMRT), which allows a model to be built while imposing a hierarchical order on the hypotheses under study. This new procedure makes it possible to study the hierarchical effect of two sets of explanatory variables, a main set and a subordinate set, and then to compute their explanatory power. The final model is interpreted as in a nested MANOVA. The results of this analysis can provide additional information on the links between the response and the explanatory variables, for example interactions between the two explanatory sets that were not brought out by the usual MRT analysis. Finally, we study the predictive power of generalized linear models (GLMs) by modelling the biomass of different tropical tree species as a function of some of their allometric measurements. More specifically, we examine the ability of Gaussian and gamma error structures to provide the most accurate predictions. We show that, for one particular species, the predictive power of a model using the gamma error structure is superior. This study is set in a practical context and is intended as an example for managers wishing to estimate precisely the carbon capture of tropical tree plantations. Our conclusions could become an integral part of a program for reducing carbon emissions through land-use changes.
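As a hedged sketch of the GLM comparison described above (the single allometric predictor, the log link, and all data below are assumptions for illustration; the thesis's exact model formulas are not given in this abstract), fitting Gaussian and gamma error structures to biomass data with statsmodels could look like:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-in for field data: diameter at breast height (cm) and biomass (kg).
dbh = rng.uniform(5, 60, size=200)
biomass = 0.05 * dbh**2.5 * rng.gamma(shape=20, scale=1 / 20, size=200)  # invented allometry

# Work on log(dbh) so the power-law allometry becomes linear on the link scale.
X = sm.add_constant(np.log(dbh))

# Gaussian vs. gamma error structure, both with a log link.
gauss = sm.GLM(biomass, X, family=sm.families.Gaussian(sm.families.links.Log())).fit()
gamma = sm.GLM(biomass, X, family=sm.families.Gamma(sm.families.links.Log())).fit()

# Compare fit and predictive accuracy; the thesis reports the gamma structure
# winning for one particular species, which this comparison mimics in spirit only.
for name, model in [("gaussian", gauss), ("gamma", gamma)]:
    rmse = np.sqrt(np.mean((biomass - model.fittedvalues) ** 2))
    print(f"{name}: AIC={model.aic:.1f}, RMSE={rmse:.1f}")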
Polarization and correlation phenomena in the radiative electron capture by bare highly-charged ions
Abstract:
In this work, the interaction between a photon and an electron in the strong Coulomb field of an atomic nucleus is studied, using the example of radiative electron capture in collisions of highly charged particles. In recent years this charge-exchange process has been investigated in detail, both experimentally and theoretically, in particular for relativistic ion-atom collisions, with the focus mainly on total and differential cross sections. More recently, spin and polarization effects as well as correlation effects in these collision processes have been discussed increasingly. These effects are expected to be very sensitive to relativistic effects in the collision, and thus to provide an excellent method for determining them. Moreover, such measurements could also indirectly allow the polarization of the ion beam to be determined, which would open up new experimental possibilities in both atomic and nuclear physics. In this dissertation, these first investigations of spin, polarization, and correlation effects are first summarized systematically. Density-matrix theory provides the appropriate method for this purpose. With this method, the general equations for two-step recombination are then derived. In this process an electron is first captured radiatively into an excited state, which in a second step decays to the ground state with emission of a second (characteristic) photon. These equations can of course be extended to arbitrary multi-step as well as one-step processes. For direct electron capture into the ground state, the linear polarization of the recombination photons was studied, and it was shown that this provides a means of determining the polarization of the particles in the entrance channel of the heavy-ion collision. Calculations for recombination with bare U92+ projectiles show, for example, that the spin polarization of the incident electrons leads to a rotation of the linear polarization of the emitted photons out of the scattering plane. This polarization rotation can be measured with newly developed position- and polarization-sensitive solid-state detectors, yielding a method for measuring the polarization of the incident electrons and of the ion beam. K-shell recombination is a simple example of a one-step process. The best-known example of two-step recombination is electron capture into the 2p3/2 state of a bare ion followed by the Lyman-α1 decay (2p3/2 → 1s1/2). Within the framework of density-matrix theory, both the angular distribution and the linear polarization of the characteristic photons were studied. Both (measurable) quantities are considerably affected by the interference of the E1 (electric dipole) channel with the much weaker M2 (magnetic quadrupole) channel. For the angular distribution of the Lyman-α1 decay in hydrogen-like uranium, this E1-M2 mixing leads to a 30% effect. Taking this interference into account resolves the previously existing discrepancy between theory and experiment for the alignment of the 2p3/2 state. In addition to these one-particle cross sections (measurement of the capture photon or of the characteristic photon), the correlation between the two photons was also calculated. These correlations should be observable in X-X coincidence measurements.
The main emphasis of these investigations was on the photon-photon angular correlation, which is experimentally the easiest to measure. In this work, detailed calculations of the coincident X-X angular distributions were carried out for electron capture into the 2p3/2 state of the bare uranium ion and the subsequent Lyman-α1 transition. As already mentioned, the angular distribution of the characteristic photon depends not only on the angle of the recombination photon but also strongly on the spin polarization of the incident particles. This opens up a second possibility for measuring the polarization of the incident ion beam or of the incident electrons.
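In standard notation, the two-step recombination described above (radiative capture into the 2p3/2 state followed by the characteristic Lyman-α1 decay) reads:

\( e^- + \mathrm{U}^{92+} \to \mathrm{U}^{91+}(2p_{3/2}) + \hbar\omega_1, \qquad \mathrm{U}^{91+}(2p_{3/2}) \to \mathrm{U}^{91+}(1s_{1/2}) + \hbar\omega_2 \ (\text{Lyman-}\alpha_1) \)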
Abstract:
Abstract: Big data is nowadays a fashionable topic, independently of what people mean when they use this term. But being big is just a matter of volume, although there is no clear agreement on the size threshold. On the other hand, it is easy to capture large amounts of data using a brute-force approach. So the real goal should not be big data but to ask ourselves, for a given problem, what is the right data and how much of it is needed. For some problems this would imply big data, but for the majority of problems much less data is needed. In this talk we explore the trade-offs involved and the main problems that come with big data, using the Web as a case study: scalability, redundancy, bias, noise, spam, and privacy. Speaker Biography: Ricardo Baeza-Yates has been VP of Research for Yahoo Labs, leading teams in the United States, Europe, and Latin America, since 2006, and has been based in Sunnyvale, California, since August 2014. During this time he has led the labs in Barcelona and Santiago de Chile, and between 2008 and 2012 he also oversaw the Haifa lab. He is also a part-time professor in the Department of Information and Communication Technologies of the Universitat Pompeu Fabra in Barcelona, Spain. During 2005 he was an ICREA research professor at the same university. Until 2004 he was a professor and, before that, founder and director of the Center for Web Research at the Department of Computing Science of the University of Chile (on leave of absence until today). He obtained a Ph.D. in CS from the University of Waterloo, Canada, in 1989. Before that he obtained two master's degrees (M.Sc. CS and M.Eng. EE) and the electronics engineer degree from the University of Chile in Santiago. He is co-author of the best-selling textbook Modern Information Retrieval, published in 1999 by Addison-Wesley, with a second enlarged edition in 2011 that won the ASIST 2012 Book of the Year award. He is also co-author of the second edition of the Handbook of Algorithms and Data Structures (Addison-Wesley, 1991) and co-editor of Information Retrieval: Algorithms and Data Structures (Prentice-Hall, 1992), among more than 500 other publications. From 2002 to 2004 he was elected to the board of governors of the IEEE Computer Society, and in 2012 he was elected to the ACM Council. He has received the Organization of American States award for young researchers in exact sciences (1993), the Graham Medal for innovation in computing given by the University of Waterloo to distinguished alumni (2007), the CLEI Latin American distinction for contributions to CS in the region (2009), and the National Award of the Chilean Association of Engineers (2010), among other distinctions. In 2003 he was the first computer scientist to be elected to the Chilean Academy of Sciences, and since 2010 he has been a founding member of the Chilean Academy of Engineering. In 2009 he was named an ACM Fellow and in 2011 an IEEE Fellow.
Abstract:
Title: Data-Driven Text Generation using Neural Networks. Speaker: Pavlos Vougiouklis, University of Southampton. Abstract: Recent work on neural networks shows their great potential at tackling a wide variety of Natural Language Processing (NLP) tasks. This talk will focus on the Natural Language Generation (NLG) problem and, more specifically, on the extent to which neural network language models can be employed for context-sensitive and data-driven text generation. In addition, a neural network architecture for response generation in social media will be discussed, along with the training methods that enable it to capture contextual information and participate effectively in public conversations. Speaker Bio: Pavlos Vougiouklis obtained his five-year Diploma in Electrical and Computer Engineering from the Aristotle University of Thessaloniki in 2013. He was awarded an MSc degree in Software Engineering from the University of Southampton in 2014. In 2015 he joined the Web and Internet Science (WAIS) research group of the University of Southampton, where he is currently working towards his PhD in the field of Neural Network Approaches for Natural Language Processing. Title: Provenance is Complicated and Boring — Is there a solution? Speaker: Darren Richardson, University of Southampton. Abstract: Paper trails, auditing, and accountability — arguably not the sexiest terms in computer science. But then you discover that you've possibly been eating horse meat, and the importance of provenance becomes almost palpable. Having accepted that we should be creating provenance-enabled systems, the challenge of communicating that provenance to casual users is not trivial: users should not have to have a detailed working knowledge of your system, and they certainly shouldn't be expected to understand the data model. So how, then, do you give users insight into the provenance without having to build a bespoke system for each and every provenance installation? Speaker Bio: Darren is a final-year Computer Science PhD student. He completed his undergraduate degree in Electronic Engineering at Southampton in 2012.
Abstract:
Lake Banyoles (l'Estany de Banyoles), a peculiar system both in its geological formation and in its limnological characteristics, currently harbours a fish community that is profoundly modified with respect to the original one. The largemouth bass (Micropterus salmoides), introduced at the end of the 1960s, is today one of the dominant species in this community and occupies above all the littoral habitat of the lake. The species has been very well studied in North America, across several disciplines of biology and over several decades, so that a large volume of information about it is now available. Outside its continent of origin, however, it has received little attention, despite its wide expansion around the world. This doctoral thesis addresses, with a descriptive approach, aspects hitherto unknown for the species in Lake Banyoles, in the Iberian Peninsula, and even in Europe. Specifically, its condition, growth, and demography, as well as their temporal variations, have been analysed. To this end, a sampling programme was designed consisting of ten intensive fishing campaigns plus some small additional samplings in between, extending from July 1997 to November 1999. Specimens were captured by electrofishing from a boat fitted out expressly for this study, a technique that proved considerably efficient despite the difficulties posed by the environment. Mark-recapture sampling was carried out based on fin clipping and, in some cases, marking with acrylic paint. Only in the last campaign (November 1999) was a substantial part of the catch sacrificed in order to remove the otoliths for age determination. As regards data analysis, a wide range of methods and models was applied to each of the aspects studied, in order to compare the results and validate their reliability. For condition, analysis of covariance (ANCOVA) and analogous methods were applied, as well as, in parallel, regressions and analyses derived from the length-weight relationship. In the study of growth, several models were fitted by regression on size-at-age data and on observed size increments per time interval; length-frequency analyses were also applied, and, finally, back-calculation methods were applied to the annual radius increments observed in the otoliths. For the study of demography, mark-recapture models were applied to estimate population size and survival, and several continuous survival models were then fitted to these prior estimates. The catchabilities associated with the new capture technique were also estimated. In addition, a survey of the lake's population of recreational anglers was designed and carried out, aimed basically at determining the fishing pressure to which the species is subjected. The results show, above all, a high inter-annual stability in all the aspects studied, explained by the environmental stability that is, in turn, characteristic of this lake ecosystem. This is reflected in a maximum observed longevity that equals the maximum described in the literature for the species. 
At the same time, strong seasonal oscillations were described in condition, growth, and survival; these, however, differ somewhat in their timing, indicating some differentiation in the factors that regulate them.
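The abstract does not state which mark-recapture estimators were used; purely as an illustrative sketch of the simplest such model (the Lincoln-Petersen estimator with Chapman's bias correction, with invented catch numbers rather than Banyoles data), population size from two fishing campaigns could be estimated as follows:

# Chapman's bias-corrected Lincoln-Petersen estimator for a closed population.
# The catch numbers below are invented for illustration only.

def chapman_estimate(n1: int, n2: int, m2: int) -> tuple[float, float]:
    """n1: fish marked on the first visit; n2: fish caught on the second visit;
    m2: marked fish recaptured on the second visit."""
    n_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
    # Approximate variance of the Chapman estimator.
    var = ((n1 + 1) * (n2 + 1) * (n1 - m2) * (n2 - m2)) / ((m2 + 1) ** 2 * (m2 + 2))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=250, n2=300, m2=40)
print(f"estimated population size: {n_hat:.0f} +/- {se:.0f}")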
Abstract:
Snakes are thought of as fear-relevant stimuli (biologically prepared to be associated with fear), which can lead to enhanced attentional capture when compared to fear-irrelevant stimuli. Inherent limitations of key-press measures might be bypassed by measuring eye movements, since these are more closely related to attentional processes than reaction times. An eye-tracking technique was combined with the flicker paradigm in two studies, both run on a sample of university students. In both studies, participants were instructed to detect changes between pairs of scenes. Attentional orienting towards the changing element in the scene was analysed, as well as the role of fear of snakes as a moderator variable. The results of both studies revealed a significantly shorter time to first fixation for snake stimuli compared to control stimuli. A facilitating effect of fear of snakes was also found, with highly fearful participants showing a shorter time to first fixation for snake stimuli than low-fear participants. The results are in line with current research supporting the advantage of snakes in grabbing attention due to their evolutionary-biological significance.
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance that allows for uncertain probability of detection. The method has been specifically designed to allow for variation in the singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording, for each bird, the subintervals in which it is detected singing. The method can be viewed as generating data equivalent to closed capture-recapture information. It differs from the distance and multiple-observer methods in that it does not require all the birds to sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, with a laptop computer sending signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. The singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated. Underestimation was caused both by the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
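As a hedged sketch of the simulation design described above (the transition probabilities and detection rule are invented; the paper's actual parameter values are not given in this abstract), generating Markovian detection histories for a four-interval point count might look like:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state Markov chain per 2-min interval: singing bout vs. silence.
# p_start, p_stay, and p_detect are illustrative values, not the paper's.
def detection_history(p_start=0.4, p_stay=0.7, p_detect=0.8, n_intervals=4):
    """Simulate one bird's capture-recapture-style detection history."""
    singing = rng.random() < p_start          # state in the first interval
    history = []
    for _ in range(n_intervals):
        detected = singing and (rng.random() < p_detect)
        history.append(int(detected))
        # Markovian transition: singing bouts persist, silence may end.
        singing = rng.random() < (p_stay if singing else p_start)
    return history

histories = [detection_history() for _ in range(50)]
n_detected = sum(any(h) for h in histories)
print(f"birds detected at least once: {n_detected}/50")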
Abstract:
Two wavelet-based control variable transform schemes are described and are used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme currently used in an operational assimilation system. Qualitatively, the wavelet-based schemes show potential for modelling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral spaces when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to the diagnostics found from the explicit matrix than those from the non-wavelet scheme. Even though the nature of the covariances has the right qualities in spectral space, variances are found to be too low at some wavenumbers, and vertical correlation length scales are found to be too long at most scales. The wavelet schemes are found to be good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
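For context, a standard identity from variational assimilation theory (general background, not a result of this paper): if the control variable transform is \( \delta x = \mathbf{U} \chi \) and the elements of \( \chi \) are assumed mutually uncorrelated with unit variance, then the forecast error covariance implied by the scheme is

\( \mathbf{B}_{\mathrm{implied}} = \mathbf{U}\mathbf{U}^{\mathrm{T}} \),

so the comparisons above amount to testing how closely \( \mathbf{U}\mathbf{U}^{\mathrm{T}} \) reproduces the explicit matrix \( \mathbf{B} \).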
Abstract:
Current changes in the tropical hydrological cycle, including water vapour and precipitation, are presented over the period 1979-2008, based on a diverse suite of observational datasets and atmosphere-only climate models. Models capture the observed variability in tropical moisture, while reanalyses do not. Observed variability in precipitation is highly dependent upon the satellite instruments employed and shows only cursory agreement with model simulations, relating primarily to the interannual variability associated with the El Niño Southern Oscillation. All datasets display a positive relationship between precipitation and surface temperature, but with a large spread. The tendency for wet, ascending regions to become wetter at the expense of dry, descending regimes is in general reproduced. Finally, the frequency of extreme precipitation is shown to rise with warming in the observations and for the model ensemble mean, but with large spread among the model simulations. The influence of the Earth's radiative energy balance in relation to changes in the tropical water cycle is discussed.
Abstract:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies of the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper, and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for a duration of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders, and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples were sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers, and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers, and spiders together requires a minimum sampling effort of 110 sub-samples, each of 16 seconds' duration. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component of the toolbox of techniques used during both experimental and management sampling regimes within agroecosystems, grasslands, and other low-lying vegetation types.
Abstract:
1. We studied a reintroduced population of the formerly critically endangered Mauritius kestrel Falco punctatus Temminck from its inception in 1987 until 2002, by which time the population had attained carrying capacity for the study area. Post-1994, the population received minimal management other than the provision of nestboxes. 2. We analysed data collected on survival (1987-2002) using program MARK to explore the influence of density-dependent and density-independent processes on survival over the course of the population's development. 3. We found evidence for non-linear, threshold density dependence in juvenile survival rates. Juvenile survival was also strongly influenced by climate, with the temporal distribution of rainfall during the cyclone season being the most influential climatic variable. Adult survival remained constant throughout. 4. Our most parsimonious capture-mark-recapture statistical model, which was constrained by density and climate, explained 75.4% of the temporal variation exhibited in juvenile survival rates over the course of the population's development. 5. This study is an example of how data collected as part of a threatened species recovery programme can be used to explore the role and functional form of natural population regulatory processes. With the improvements in conservation management techniques and the resulting success stories, formerly threatened species offer unique opportunities to further our understanding of the fundamental principles of population ecology.
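As a purely illustrative sketch of a threshold density-dependence term of the kind described in point 3 (the hinge functional form, threshold, and coefficients below are invented; the fitted MARK model is not reproduced in this abstract), juvenile survival could be written as a logistic function of density and a rainfall covariate:

import numpy as np

def juvenile_survival(density, rainfall, a=1.2, b=-0.15, c=-0.4, threshold=40.0):
    """Hypothetical threshold (hinge) density dependence on the logit scale:
    survival is flat below the threshold and declines with density above it;
    rainfall enters as a standardized cyclone-season climate covariate."""
    hinge = np.maximum(0.0, density - threshold)      # zero below the threshold
    logit = a + b * hinge + c * rainfall
    return 1.0 / (1.0 + np.exp(-logit))               # inverse logit -> probability

# Example: survival at low vs. high density for average cyclone-season rainfall.
print(juvenile_survival(density=30, rainfall=0.0))    # below threshold
print(juvenile_survival(density=55, rainfall=0.0))    # above threshold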
Abstract:
Variational data assimilation systems for numerical weather prediction rely on a transformation of model variables to a set of control variables that are assumed to be uncorrelated. Most implementations of this transformation are based on the assumption that the balanced part of the flow can be represented by the vorticity. However, this assumption is likely to break down in dynamical regimes characterized by low Burger number. It has recently been proposed that a variable transformation based on potential vorticity should lead to control variables that are uncorrelated over a wider range of regimes. In this paper we test the assumption that a transform based on vorticity and one based on potential vorticity each produce an uncorrelated set of control variables. Using a shallow-water model, we calculate the correlations between the transformed variables under the different methods. We show that the control variables resulting from a vorticity-based transformation may retain large correlations in some dynamical regimes, whereas a potential-vorticity-based transformation successfully produces a set of uncorrelated control variables. Calculations of spatial correlations show that the benefit of the potential vorticity transformation is linked to its ability to capture more accurately the balanced component of the flow.
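For reference, the standard shallow-water definitions (textbook relations, not this paper's derivation): the potential vorticity of a fluid column of depth \( h \) is

\( q = \dfrac{\zeta + f}{h} \),

where \( \zeta \) is the relative vorticity and \( f \) the Coriolis parameter. Linearizing about a resting depth \( H \) gives the perturbation potential vorticity \( q' = \zeta'/H - f h'/H^{2} \), the quantity on which a potential-vorticity-based control variable transform is built.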
Abstract:
Ice cloud representation in general circulation models remains a challenging task, owing to the lack of accurate observations and the complexity of microphysical processes. In this article, we evaluate the ice water content (IWC) and ice cloud fraction statistical distributions from the numerical weather prediction models of the European Centre for Medium-Range Weather Forecasts (ECMWF) and the UK Met Office, exploiting the synergy between the CloudSat radar and the CALIPSO lidar. Using the last three weeks of July 2006, we analyse the global ice cloud occurrence as a function of temperature and latitude and show that the models capture the main geographical and temperature-dependent distributions, but overestimate ice cloud occurrence in the Tropics in the temperature range from −60 °C to −20 °C and in the Antarctic for temperatures higher than −20 °C, while underestimating it at very low temperatures. A global statistical comparison of the occurrence of grid-box mean IWC at different temperatures shows that both the mean and the range of IWC increase with increasing temperature. Globally, the models capture most of the IWC variability in the temperature range between −60 °C and −5 °C, and also reproduce the observed latitudinal dependencies in the IWC distribution arising from different meteorological regimes. Two versions of the ECMWF model are assessed. The recent operational version, with a diagnostic representation of precipitating snow and mixed-phase ice cloud, fails to represent the IWC distribution in the −20 °C to 0 °C range, but a new version with prognostic variables for liquid water, ice, and snow is much closer to the observed distribution. The comparison of models and observations provides a much-needed analysis of the vertical distribution of IWC across the globe, highlighting the ability of the models to reproduce much of the observed variability as well as the deficiencies where further improvements are required.