904 results for "encoding of measurement streams"
Abstract:
The manufacturing process of pressure vessel components is of great importance to the efficiency of the equipment during its operation and life cycle. With this in mind, the objective of this dissertation was to analyze methods for determining the degree of deformation in formed components by measuring the components themselves, and then to compare the results with the values found in pressure vessel manufacturing standards. In this study the entire manufacturing process of a pressure vessel component, known as a head (or end), was followed. Using a methodology drawn from the literature, it was possible to carry out relative and logarithmic strain measurements on these components and to compare them with the results obtained from the equations given in standards such as AD-Merkblatt and the ASME code for pressure vessels. The results were also compared with the logarithmic methodology, taking into account the deformation of the empirical grid and the thickness of the components studied. It can be concluded from this study that, despite the existence of empirical methods for measuring the strain in plastically formed components, the adoption of the component's manufacturing standard is recommended in all situations, as is shown and explained throughout the development of this study and through the results found.
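For reference, a minimal sketch of the two strain measures the abstract mentions, assuming an initial gauge length l0 scribed on the grid that deforms to l, and an initial wall thickness t0 that thins to t (the symbols are illustrative, not the dissertation's notation):

```latex
e = \frac{l - l_0}{l_0}, \qquad
\varepsilon = \ln\frac{l}{l_0} = \ln(1 + e), \qquad
\varepsilon_t = \ln\frac{t}{t_0}
```

The relative (engineering) strain e and the logarithmic (true) strain ε agree for small deformations and diverge as the forming becomes severe, which is one reason a specific measure must be fixed when comparing against standard limits.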
Abstract:
This work addresses the importance of the bond between steel and concrete in reinforced concrete structures, the tests and forms of measurement used, and how the results are used in structural design, which takes NBR 6118 and NBR 7480 as references. It also deals with the importance of a reliable assessment, discussing shortcomings that the test currently used, NBR 7477, exhibits in specific circumstances, as discussed under EC-094. It also presents an initial proposal for a simplified pullout test intended to analyze the bond of steel bars with diameters smaller than 10 mm, for which the standardized test of ABNT NBR 7477 has shown excessive variability in its results; this is the main criticism of the test used today, which is applied without distinction to thick and thin bars. The proposed methodology is still in a consolidation phase, given the time and the volume of tests required. For this reason it is not yet possible to compare the values obtained with the results of the test prescribed by the standard in force; this can only be done in a continuation of this work.
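For context on the quantity a pullout test yields, a minimal sketch of the usual reduction of the test result to an average bond stress, assuming a measured pullout force F, a bar of diameter d, and a bonded (embedment) length l_b; the specimen geometry and acceptance criteria prescribed by NBR 7477 are not reproduced here:

```latex
\tau_b = \frac{F}{\pi \, d \, l_b}
```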
Abstract:
Conditional cash transfer (CCT) programs entered the public agenda because of their potential to interfere with the intergenerational cycle of poverty. This article aims to analyze the process through which the health conditionalities of the Programa Bolsa Família were formulated and, secondarily, to assess their interface with the trajectory of food and nutrition policies in Brazil. To this end, the study adopted as its analytical framework the multiple streams model proposed by Kingdon, for whom change in the public agenda occurs when the problem stream, the policy stream (solutions and alternatives), and the political stream converge. The trajectory of these streams was reconstructed through the analysis of government documents and of oral accounts obtained in interviews. At the time the health conditionalities were formulated, the problem stream contained the need to change the strategy for fighting malnutrition, owing to criticism of the Incentivo ao Combate às Carências Nutricionais (ICCN) and to the termination of the Programa de Distribuição de Estoques de Alimentos (PRODEA). As for the solutions stream, several CCT proposals were under way. In the political stream, there was the decision to create a social protection network. In this process, the Coordenação Geral da Política de Alimentação e Nutrição took on the role of policy entrepreneur. Reflecting on this process helps in understanding the role of health services in an intersectoral program.
Abstract:
The aim of this study was to investigate the involvement of attentional resources in the encoding and maintenance of visual and spatial information in working memory. A dual-task paradigm was used in which a primary spatial localization task was performed simultaneously with a secondary attentional tone-discrimination task. Participants' performance (n = 20) on the primary task was affected by the presence of, and the similarity between, the tones of the secondary task, and also by the instruction to prioritize one task or the other. The results indicate that attentional resources (of the central executive) are involved in the encoding and active maintenance of integrated information in visuospatial memory, as well as in maintaining the goals of the tasks to be performed simultaneously.
Abstract:
INTRODUCTION: The accurate evaluation of the error of measurement (EM) is extremely important in growth studies as well as in clinical research, since the changes involved are usually quantitatively small. In any study it is important to evaluate the EM in order to validate the results and, consequently, the conclusions. Because of its extreme simplicity, the Dahlberg formula is widely used worldwide, mainly in cephalometric studies. OBJECTIVES: (I) To elucidate the formula proposed by Dahlberg in 1940, evaluating it by comparison with linear regression analysis; (II) To propose a simple methodology for analyzing the results, which provides statistical elements to assist researchers in obtaining a consistent evaluation of the EM. METHODS: We applied linear regression analysis, hypothesis tests on its parameters, and a formula involving the standard deviation of the error of measurement and the measured values. RESULTS AND CONCLUSION: We introduced an error coefficient, which is a proportion related to the scale of the observed values. This provides new parameters that facilitate the evaluation of the impact of random errors on the final results of a study.
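A minimal computational sketch of Dahlberg's 1940 formula as it is usually stated, D = sqrt(Σ d_i² / 2n) for n pairs of duplicate measurements with differences d_i; the paper's regression-based evaluation is not reproduced here, and the scale-relative coefficient below (the Dahlberg error divided by the mean measured value) is only an illustrative assumption in the spirit of the proposed error coefficient:

```python
import numpy as np

def dahlberg_error(first: np.ndarray, second: np.ndarray) -> float:
    """Dahlberg's error of measurement from duplicate measurements:
    sqrt(sum(d_i^2) / (2 n)), with d_i the paired differences."""
    d = first - second
    return float(np.sqrt(np.sum(d**2) / (2 * len(d))))

# Duplicate cephalometric measurements (mm), illustrative values only.
m1 = np.array([21.3, 18.7, 25.1, 19.9, 22.4])
m2 = np.array([21.0, 19.1, 24.8, 20.3, 22.1])

em = dahlberg_error(m1, m2)
# Scale-relative coefficient (an illustrative assumption, not the paper's):
error_coefficient = em / np.mean(np.concatenate([m1, m2]))
print(f"Dahlberg EM = {em:.3f} mm, relative = {error_coefficient:.3%}")
```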
Abstract:
This article, produced as part of the commemorations of the 80th anniversary of the publication of the Manifesto dos Pioneiros da Educação Nova, asks how current that charter remains. To that end, it explores the historical conditions of the document's emergence, the meanings attributed to the New School (Escola Nova) in Brazil in the 1930s, and the disputes that took place in the educational arena during the period. It also discusses the specificities of the Brazilian New School movement, seeking to demonstrate that the Escola Nova took shape in the country as a formula, with multiple meanings and distinct appropriations produced at the intersection of three strands: the pedagogical, the ideological, and the political. As regards the first aspect, the vagueness of its conceptual boundaries allowed the expression Escola Nova to bring together different educators, Catholics and liberals, around the pedagogical principles of active teaching. In the second case, the formula offered itself as a means for transforming society, serving the divergent ends of the contending groups. In the third sense, it became a political banner, captured by the Manifesto and its signatories as a sign of renewal of the educational system. The document thus emerged as part of the political struggle over control of the State and its dynamics and, therefore, as an element of cohesion for a front of educators who, despite their differences, rallied around certain common goals, such as secular, free, and compulsory education. Moreover, it also represented a group of intellectuals who embraced the same project of nation, albeit with internal divergences.
Abstract:
The main objective of this work is to compare two instruments for measuring burnout in athletes: the Inventario de Burnout en Deportistas-Revisado (IBD-R) and the Athlete Burnout Questionnaire (ABQ). Both measurement models assume a three-dimensional composition of the syndrome, with a supposed conceptual parallelism between dimensions. Correlation analyses between supposedly equivalent subscales show, however, that a good degree of convergence exists for only one pair of subscales (Emotional Exhaustion of the IBD-R and Physical and Emotional Exhaustion of the ABQ). Two other subscales that should converge (Reduced Personal Accomplishment of the IBD-R and Reduced Sense of Accomplishment of the ABQ) show a lower degree of convergence than expected, and the Depersonalization subscale of the IBD-R and the Sport Devaluation subscale of the ABQ show hardly any relationship. The theoretical and psychometric dissonances observed lead us to consider the development of a new model that integrates the non-convergent components of burnout.
Abstract:
Interactive theorem provers (ITPs for short) are tools whose final aim is to certify proofs written by human beings. To reach that objective they have to fill the gap between the high-level language used by humans for communicating and reasoning about mathematics and the lower-level language that a machine is able to “understand” and process. The user perceives this gap in terms of missing features or inefficiencies. The developer tries to accommodate the user's requests without increasing the already high complexity of these applications. We believe that satisfactory solutions can only come from a strong synergy between users and developers. We devoted most of our Ph.D. to designing and developing the Matita interactive theorem prover. The software was born in the computer science department of the University of Bologna as the result of composing together all the technologies developed by the HELM team (to which we belong) for the MoWGLI project. The MoWGLI project aimed at giving access through the web to the libraries of formalised mathematics of various interactive theorem provers, taking Coq as the main test case. The motivations for giving life to a new ITP are: • to study the architecture of these tools, with the aim of understanding the source of their complexity; • to exploit such knowledge to experiment with new solutions that, for backward-compatibility reasons, would be hard (if not impossible) to test on a widely used system like Coq. Matita is based on the Curry-Howard isomorphism, adopting the Calculus of Inductive Constructions (CIC) as its logical foundation. Proof objects are thus, to some extent, compatible with the ones produced with the Coq ITP, which is itself able to import and process the ones generated using Matita. Although the systems have a lot in common, they share no code at all, and even most of the algorithmic solutions are different. The thesis is composed of two parts, in which we respectively describe our experience as a user and as a developer of interactive provers. In particular, the first part is based on two different formalisation experiences: • our internship with the Mathematical Components team (INRIA), which is formalising the finite group theory required to attack the Feit-Thompson Theorem. To tackle this result, which gives an effective classification of finite groups of odd order, the team adopts the SSReflect Coq extension, developed by Georges Gonthier for the proof of the four-colour theorem; • our collaboration on the D.A.M.A. project, whose goal is the formalisation of abstract measure theory in Matita, leading to a constructive proof of Lebesgue's Dominated Convergence Theorem. The most notable issues we faced, analysed in this part of the thesis, are the following: the difficulties arising when using “black box” automation in large formalisations; the impossibility for a user (especially a newcomer) to master the contents of a library of already formalised results; the uncomfortable big-step execution of proof commands historically adopted in ITPs; and the difficult encoding of mathematical structures with a notion of inheritance in a type theory without subtyping, like CIC.
In the second part of the manuscript many of these issues are analysed through the lens of an ITP developer, describing the solutions we adopted in the implementation of Matita to solve these problems: integrated searching facilities to assist the user in handling large libraries of formalised results; a small-step execution semantics for proof commands; a flexible implementation of coercive subtyping allowing multiple inheritance with shared substructures; and automatic tactics, integrated with the searching facilities, that generate proof commands (and not only proof objects, which are usually kept hidden from the user), one of which is specifically designed to be user-driven.
Abstract:
In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. In every area concerned with noise measurement many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard concerning the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of measurement uncertainties is the most important factor to deal with. A sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult, because the results are affected by systematic errors and standard deviations that depend on the number of microphones placed on the surface, their spatial positions, and the complexity of the sound field. A statistical approach can give an overview of the differences between the sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
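As background to the quantity being compared across microphone arrays, a minimal sketch of the basic sound power relation underlying ISO 3744: the energy average of the N microphone sound pressure levels over the measurement surface, plus the surface-area term. The standard's environmental and background-noise corrections (K1, K2) are omitted here, and the values are illustrative only:

```python
import numpy as np

def sound_power_level(Lp: np.ndarray, surface_area_m2: float) -> float:
    """Sound power level L_W from N sound pressure levels Lp_i (dB) taken
    on a measurement surface of area S:
      L_W = 10*log10( (1/N) * sum(10**(Lp_i/10)) ) + 10*log10(S / S0),
    with reference area S0 = 1 m^2. Environmental and background-noise
    corrections (K1, K2 in ISO 3744) are omitted in this sketch."""
    Lp_mean = 10.0 * np.log10(np.mean(10.0 ** (Lp / 10.0)))  # energy average
    return Lp_mean + 10.0 * np.log10(surface_area_m2 / 1.0)

# Ten microphones on a hemispherical surface of radius 2 m (illustrative).
Lp = np.array([72.1, 71.8, 73.0, 72.4, 71.5, 72.9, 72.2, 71.9, 72.6, 72.0])
S = 2.0 * np.pi * 2.0**2  # hemisphere area = 2*pi*r^2
print(f"L_W = {sound_power_level(Lp, S):.1f} dB re 1 pW")
```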
Abstract:
Premise: In the literary works of our anthropological and cultural imagination, the various languages and the different discursive practices are not necessarily quoted, expressly alluded to or declared through clear expressive mechanisms; instead, they rather constitute a substratum, a background, now consolidated, which with irony and intertextuality shines through the thematic and formal elements of each text. The various contaminations, hybridizations and promptings that we find in the expressive forms, the rhetorical procedures and the linguistic and thematic choices of post-modern literary texts are shaped as fluid and familiar categories. Exchanges and passages are no longer only allowed but also inevitable; the post-modern imagination is made up of an agglomeration of discourses that are no longer really separable, built up from texts that blend and quote one another, composing, each with its own specificities, the great family of the cultural products of our social scenario. A literary work, therefore, is not only a whole phenomenon, delimited hic et nunc by a beginning and an ending, but is a fragment of that complex, dense and boundless network that is given by the continual interrelations between human forms of communication and symbolization. The research hypothesis: A vision is delineated of comparative literature as a discipline attentive to the social contexts in which texts take shape and move and to the media-type consistency that literary phenomena inevitably take on. Hence literature is seen as an open systematicity that chooses to be contaminated by other languages and other discursive practices of an imagination that is more than ever polymorphic and irregular. Inside this interpretative framework the aim is to focus the analysis on the relationship that postmodern literature establishes with advertising discourse. On one side post-modern literature is inserted in the world of communication, loudly asserting the blending and reciprocal contamination of literary modes with media ones, absorbing their languages and signification practices, translating them now into thematic nuclei, motifs and sub-motifs and now into formal expedients and new narrative choices; on the other side advertising is chosen as a signification practice of the media universe, which since the 1960s has actively contributed to shaping the dynamics of our socio-cultural scenarios, in terms which are just as important as those of other discursive practices. Advertising has always been a form of communication and symbolization that draws on the collective imagination – myths, actors and values – turning them into specific narrative programs for its own texts. Hence the aim is to interpret and analyze this relationship both from a strictly thematic perspective – seeking to understand what literature speaks about when it speaks about advertising, and looking for advertising quotations in post-modern fiction – and from a formal perspective, with a search for parallels and discordances between the rhetorical procedures, the languages and the verifiable stylistic choices in the texts of the two different signification practices. The analysis method chosen, for the purpose of a constructive multiplication of perspectives, aims to approach the analytical processes of semiotics, applying its instruments when possible, in order to highlight the thematic and formal relationships between literature and advertising.
The corpus: The corpus of the literary texts is made up of various novels and, although attention is focused on the post-modern period, there will also be unavoidable references to essential authors whose works prompted various reflections: H. de Balzac, Zola, Fitzgerald, Joyce, Calvino, etc… However, the analysis focuses the corpus on three authors: Don DeLillo, Martin Amis and Aldo Nove, and in particular on the following novels: “Americana” (1971) and “Underworld” (1999) by Don DeLillo, “Money” (1984) by Martin Amis and “Woobinda and other stories without a happy ending” (1996) and “Superwoobinda” (1998) by Aldo Nove. The corpus selection is restricted to these novels for two fundamental reasons: 1. assuming parameters of spatio-temporal evaluation, the texts are representative of different socio-cultural contexts and collective imaginations (from the masterly glimpses of American life by DeLillo, to the examples of contemporary Italian life by Nove, down to the English imagination of Amis) and of different historical moments (the 1970s of DeLillo’s Americana, the 1980s of Amis, down to the 1990s of Nove, decades often used as criteria for dividing postmodernism into phases); 2. adopting a perspective of strictly thematic analysis, as mentioned in the research hypothesis, the variations and the constants in the novels (thematic nuclei, topoi, images and narrative developments) frequently speak of advertising, and within the narrative plot they register its various expressions and realizations: in values, themes, texts, urban settings, etc… In these novels the themes and the processes of signification of advertising discourse pervade time, space and the relationships that the narrator-character builds around him. We are looking at “particle-characters” whose endless facets attest to the influence and contamination of advertising in a large part of the narrative developments of the plot: on everyday life, on the processes of acquisition and encoding of reality, on ideological and cultural baggage, on the relationships and interchanges with the other characters, etc… Often the characters are victims of the implacable consequentiality of the advertising mechanism, since the latter gets the upper hand over the usual processes of communication, which are overwhelmed by it, wittingly or unwittingly (for example: disturbing openings in which the protagonist kills his or her parents on the basis of a TV spot; former advertising men who live their lives codifying them through the commercial mechanisms of products; sons and daughters of advertisers who as children, instead of playing outside, spent whole nights watching tapes of commercials). Hence the analysis arises from the text and aims to show how much the developments and the narrative plots of the novels encode, elaborate and recount the myths, the values and the narrative programs of advertising discourse, transforming them into novel components in their own right. Also starting from the text, a socio-cultural reference context is delineated, a collective imagination that differs now geographically, now historically, and from the comparison between them the aim is to deduce the constants, the similarities and the variations in the relationship between literature and advertising.
Abstract:
A very recent and exciting new area of research is the application of Concurrency Theory tools to formalize and analyze biological systems, and one of the most promising approaches comes from process algebras (process calculi). A process calculus is a formal language that allows one to describe concurrent systems and comes with well-established techniques for quantitative and qualitative analysis. Biological systems can be regarded as concurrent systems and therefore modeled by means of process calculi. In this thesis we focus on the process calculi approach to the modeling of biological systems and investigate, mostly from a theoretical point of view, several promising bio-inspired formalisms: Brane Calculi and the k-calculus family. We provide several expressiveness results, mostly by means of comparisons between calculi. We provide a lower bound on the computational power of the non-Turing-complete MBD Brane Calculus by showing an encoding of a simple P-System into MBD. We address the issue of local implementation within the k-calculus family: whether n-way rewrites can be simulated by binary interactions only. A solution introducing divergence is provided, and we prove that a deterministic solution preserving the termination property is not possible. We use the symmetric leader election problem to test synchronization capabilities within the k-calculus family. Several fragments of the original k-calculus are considered, and we prove an impossibility result about encoding n-way synchronization into (n-1)-way synchronization. A similar impossibility result is obtained in a pure computer science context. We introduce CCSn, an extension of CCS with multiple input prefixes, and show, using the dining philosophers problem, that there is no reasonable encoding of CCS(n+1) into CCSn.
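The abstract describes CCSn as CCS extended with multiple input prefixes, i.e. a single prefix that synchronizes atomically with n outputs. A minimal sketch of an abstract syntax for such processes, in which all constructor names are illustrative assumptions rather than the thesis's notation (requires Python 3.10+ for the union type alias):

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative AST for a CCS-like calculus in which an input prefix may
# wait on several channels at once (the "multiple input" of CCSn).

@dataclass(frozen=True)
class Nil:                       # the inactive process 0
    pass

@dataclass(frozen=True)
class Input:                     # a1.a2...an.P : joint input on n channels
    channels: Tuple[str, ...]
    continuation: "Process"

@dataclass(frozen=True)
class Output:                    # 'a.P : output on a single channel
    channel: str
    continuation: "Process"

@dataclass(frozen=True)
class Par:                       # P | Q : parallel composition
    left: "Process"
    right: "Process"

Process = Nil | Input | Output | Par

# A 3-way synchronization: one joint input consumes two outputs atomically,
# something plain CCS (one channel per input prefix) can only approximate.
example = Par(Input(("a", "b"), Nil()),
              Par(Output("a", Nil()), Output("b", Nil())))
```

The separation result mentioned in the abstract (no reasonable encoding of CCS(n+1) into CCSn) concerns exactly this atomicity of multi-party synchronization.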
Abstract:
The aim of this study is the creation of a Historical GIS that spatially references data retrieved from Italian and Catalan historical sources and records. The generation of this metasource was achieved through the integral acquisition of the source-oriented records and the insertion of mark-up fields, while maintaining, where possible, the original encoding of the source documents. In order to standardize the set of information contained in the original documents, and thus allow queries to the database, additional fields were introduced. Once the initial phase of data research and analysis was concluded, the new virtual source was published online within an open-source WebGIS. In conclusion, we have created a dynamic and spatially referenced database of geo-historical information. The configuration of this new source is such as to guarantee the best possible accessibility.
Abstract:
Modern internal combustion engines are becoming increasingly complex in terms of their control systems and strategies. The growth in the complexity of the algorithms results in a rise in the number of quantities to be evaluated on board for control purposes. In order to improve combustion efficiency and, simultaneously, limit pollutant emissions, the on-board evaluation of two quantities in particular has become essential: the indicated torque produced by the engine, and the angular position at which 50% of the fuel mass injected over an engine cycle has burned (MFB50). Both quantities can be evaluated through the measurement of in-cylinder pressure. Nonetheless, at present, the installation of in-cylinder pressure sensors on vehicles is extremely uncommon, mainly because of measurement reliability and cost. This work illustrates a methodological approach for the estimation of indicated torque and MFB50 that is based on the measurement of engine speed fluctuations. The methodology is compatible with typical on-board application constraints. Moreover, it requires no additional cost, since speed can be measured using the system already mounted on the vehicle, which consists of a magnetic pickup facing a toothed wheel. The estimation algorithm consists of two main parts: first, the evaluation of the indicated torque fluctuation based on the speed measurement; and second, the evaluation of the mean value of the indicated torque (over an engine cycle) and of MFB50, using their relationship with the indicated torque harmonic and other engine quantities. The procedure has been successfully applied to an L4 turbocharged Diesel engine mounted on board a vehicle.
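The thesis's estimation models are not reproduced here; the following sketch only illustrates the measurement front end the abstract describes: reconstructing the instantaneous speed from the tooth passage times of the pickup wheel and extracting one cycle harmonic of the speed fluctuation. Function and variable names are illustrative assumptions:

```python
import numpy as np

def cycle_speed_harmonic(tooth_times_s: np.ndarray, n_teeth: int, h: int):
    """Instantaneous engine speed from the tooth passage times of the
    crankshaft wheel over exactly one 4-stroke cycle (two revolutions,
    i.e. 2*n_teeth angular gaps, so 2*n_teeth + 1 timestamps), and the
    h-th cycle harmonic of the speed fluctuation (DFT bin h). A sketch:
    tooth-spacing error compensation and the thesis's mapping from the
    speed harmonic to indicated torque and MFB50 are omitted."""
    dtheta = 2.0 * np.pi / n_teeth              # angular gap per tooth [rad]
    omega = dtheta / np.diff(tooth_times_s)     # speed samples [rad/s]
    assert len(omega) == 2 * n_teeth, "expected one full engine cycle"
    spectrum = np.fft.fft(omega - omega.mean())
    return omega.mean(), 2.0 * spectrum[h] / len(omega)
```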
Abstract:
The verification of numerical models is indispensable for the improvement of quantitative precipitation forecasting (QPF). The aim of this thesis is the development of new methods for verifying the precipitation forecasts of the regional model of MeteoSwiss (COSMO-aLMo) and of the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). For this purpose, a novel observational dataset for Germany with hourly resolution was generated and applied, and the new quality measure "SAL" was developed for assessing the model forecasts. The novel observational dataset for Germany, with high temporal and spatial resolution, is created with the disaggregation method developed during MAP (the Mesoscale Alpine Programme). The idea is to combine the high temporal resolution of the radar data (hourly) with the accuracy of the precipitation amounts from station measurements (within their measurement errors). This disaggregated dataset offers new possibilities for the quantitative verification of precipitation forecasts. For the first time, an area-wide analysis of the diurnal cycle of precipitation was carried out. It showed that no diurnal cycle exists in winter, and that COSMO-aLMo reproduces this well. In summer, by contrast, a clear diurnal cycle is found both in the disaggregated dataset and in COSMO-aLMo, although the precipitation maximum in COSMO-aLMo sets in too early, between 11 and 14 UTC compared with 15 to 20 UTC in the observations, and is clearly overestimated, by a factor of about 1.5. A new quality measure was developed because conventional grid-point-based error measures no longer do justice to model development. SAL consists of three independent components and is based on the identification of precipitation objects (threshold-dependent) within a region (e.g. a river catchment). It quantifies the differences between the modelled and observed precipitation fields with respect to structure (S), amplitude (A) and location (L) within that region. SAL was tested extensively on idealized and real examples. It detects and confirms known model deficiencies, such as the diurnal-cycle problem or the simulation of too many relatively weak precipitation events, and it offers additional insight into the character of the errors, e.g. whether they lie mainly in the amplitude, in the displacement of a precipitation field, or in its structure (e.g. stratiform or small-scale convective). With SAL, daily and hourly precipitation totals of COSMO-aLMo and of the ECMWF model were verified. In a statistical sense, SAL shows that the COSMO-aLMo forecasts are of good quality especially for stronger precipitation events (which are the ones relevant to society), compared with weak events. The comparison of the two models showed that the global model predicts more widespread precipitation and hence larger objects, while COSMO-aLMo shows clearly more realistic precipitation structures. Given the models' resolutions this is not surprising, but it could not be demonstrated with conventional error measures. The methods developed in this thesis are very useful for the verification of QPF from models with high temporal and spatial resolution; the disaggregated observational dataset and the SAL quality measure provide new insights into QPF and permit more appropriate statements about the quality of precipitation forecasts.
Future applications of SAL lie in the verification of the new generation of numerical weather prediction models, which explicitly simulate the life cycle of deep convective cells.
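As a pointer to how SAL is typically computed, a minimal sketch of the amplitude component A and of the first part of the location component L, both of which are defined on the whole field; the object-based structure component S and the second location term, which require the threshold-dependent object identification, are omitted here:

```python
import numpy as np

def sal_amplitude(mod: np.ndarray, obs: np.ndarray) -> float:
    """A = (D(mod) - D(obs)) / (0.5 * (D(mod) + D(obs))), with D the
    domain-average precipitation; A lies in [-2, 2], 0 is perfect."""
    d_mod, d_obs = mod.mean(), obs.mean()
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))

def sal_location1(mod: np.ndarray, obs: np.ndarray) -> float:
    """L1 = |x(mod) - x(obs)| / d: distance between the precipitation-
    weighted centres of mass of the two fields, scaled by the largest
    distance d between two boundary points of the domain."""
    def centre(field: np.ndarray) -> np.ndarray:
        ny, nx = field.shape
        y, x = np.mgrid[0:ny, 0:nx]
        w = field / field.sum()
        return np.array([(w * y).sum(), (w * x).sum()])
    d = np.hypot(*mod.shape)  # domain diagonal (grid units)
    return float(np.linalg.norm(centre(mod) - centre(obs)) / d)
```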