913 results for DoS-resistant Protocol, SSL and HIP Model in CPN, CPN Simulation and Verification


Relevance:

100.00%

Publisher:

Abstract:

In recent decades, macro-scale hydrological models have become established as important tools for comprehensively assessing the state of global renewable freshwater resources. Today they are used to answer a wide range of scientific questions, in particular regarding the impacts of anthropogenic influences on the natural flow regime and the impacts of global change and climate change on water resources. These impacts can be estimated through a variety of water-related indicators, such as renewable (ground)water resources, flood risk, droughts, water stress and water scarcity. The advancement of macro-scale hydrological models has been fostered in particular by steadily increasing computing capacity, but also by the growing availability of remote-sensing data and derived data products that can be used to drive and improve the models. Like all macro- to global-scale modelling approaches, macro-scale hydrological simulations are subject to considerable uncertainties, which arise (i) from spatial input data sets, such as meteorological variables or land-surface parameters, and (ii) especially from the (often) simplified representation of physical processes in the models. Given these uncertainties, it is essential to verify the actual applicability and predictive capability of the models under diverse climatic and physiographic conditions. So far, however, most evaluation studies have been carried out in only a few large river basins, or have focused on continental water fluxes. This contrasts with many application studies whose analyses and conclusions are based on simulated state variables and fluxes at much finer spatial resolution (grid cell).
The core of this dissertation is a comprehensive evaluation of the general applicability of the global hydrological model WaterGAP3 for simulating monthly flow regimes and low and high flows, based on more than 2400 discharge time series for the period 1958-2010. The river basins considered represent a broad spectrum of climatic and physiographic conditions, with basin sizes ranging from 3000 to several million square kilometres. The model evaluation has two objectives. First, the model performance achieved is to serve as a benchmark against which any further model improvements can be compared. Second, a method for diagnostic model evaluation is to be developed and tested that identifies clear starting points for model improvement where performance is inadequate. To this end, complementary performance metrics are linked with nine basin attributes that quantify the climatic and physiographic conditions as well as the degree of anthropogenic influence in the individual basins. WaterGAP3 achieves medium to high performance in simulating both monthly flow regimes and low and high flows, but clear spatial patterns are evident for all performance metrics considered. Of the nine basin attributes, the degree of aridity and the mean basin slope in particular exert a strong influence on model performance. The model tends to overestimate annual flow volume with increasing aridity. This behaviour is characteristic of macro-scale hydrological models and can be attributed to the inadequate representation of runoff generation and concentration processes in water-limited regions.
In steep basins, low model performance is found with respect to the representation of monthly flow variability and temporal dynamics, which is also reflected in the quality of the low- and high-flow simulations. This observation points to necessary model improvements regarding (i) the partitioning of total runoff into fast and delayed flow components and (ii) the calculation of flow velocity in the channel. The method for diagnostic model evaluation developed in this dissertation, which links complementary performance metrics with basin attributes, was tested using the WaterGAP3 model as an example. The method has proven to be an efficient tool for explaining spatial patterns in model performance and identifying deficits in the model structure. The method is generally applicable to any hydrological model. It is, however, particularly relevant for macro-scale models and multi-basin studies, because it can partly compensate for the lack of site-specific knowledge and dedicated measurement campaigns that catchment-scale modelling usually relies on.
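The diagnostic approach described above, linking a performance metric to catchment attributes, can be sketched as follows. This is a minimal illustration with synthetic data: the choice of metric (Nash-Sutcliffe efficiency), the aridity index, and all numbers are assumptions, not WaterGAP3 results.

```python
# Sketch of diagnostic model evaluation: compute a performance metric
# (Nash-Sutcliffe efficiency) per basin, then relate it to a catchment
# attribute such as aridity. All data are synthetic and illustrative.
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
n_basins = 50
aridity = rng.uniform(0.2, 3.0, n_basins)      # hypothetical aridity index
nse = np.empty(n_basins)
for i, a in enumerate(aridity):
    obs = rng.gamma(2.0, 10.0, 120)            # 10 years of monthly discharge
    bias = 1.0 + 0.3 * a                       # overestimation grows with aridity
    sim = obs * bias + rng.normal(0, 2.0, 120)
    nse[i] = nash_sutcliffe(obs, sim)

# Diagnostic step: correlate performance with the basin attribute.
r = np.corrcoef(aridity, nse)[0, 1]
print(f"correlation(aridity, NSE) = {r:.2f}")
```

In this synthetic setup the correlation comes out negative, mirroring the dissertation's finding that performance degrades with increasing aridity.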


As the number of processors in distributed-memory multiprocessors grows, efficiently supporting a shared-memory programming model becomes difficult. We have designed the Protocol for Hierarchical Directories (PHD) to allow shared-memory support for systems containing massive numbers of processors. PHD eliminates bandwidth problems by using a scalable network, decreases hot-spots by not relying on a single point to distribute blocks, and uses a scalable amount of space for its directories. PHD provides a shared-memory model by synthesizing a global shared memory from the local memories of processors. PHD supports sequentially consistent read, write, and test-and-set operations. This thesis also introduces a method of describing locality for hierarchical protocols and employs this method in the derivation of an abstract model of the protocol behavior. An embedded model, based on the work of Johnson [ISCA19], describes the protocol behavior when mapped to a k-ary n-cube. The thesis uses these two models to study the average height in the hierarchy that operations reach, the longest path messages travel, the number of messages that operations generate, the inter-transaction issue time, and the protocol overhead for different locality parameters, degrees of multithreading, and machine sizes. We determine that multithreading is only useful for approximately two to four threads; any additional interleaving does not decrease the overall latency. For small machines and high-locality applications, this limitation is due mainly to the length of the running threads. For large machines with medium to low locality, this limitation is due mainly to the protocol overhead being too large. Our study using the embedded model shows that in situations where the run length between references to shared memory is at least an order of magnitude longer than the time to process a single state transition in the protocol, applications exhibit good performance.
If separate controllers for processing protocol requests are included, the protocol scales to 32k processor machines as long as the application exhibits hierarchical locality: at least 22% of the global references must be able to be satisfied locally; at most 35% of the global references are allowed to reach the top level of the hierarchy.


The present success in the manufacture of multi-layer interconnects in ultra-large-scale integration is largely due to the acceptable planarization capabilities of the chemical-mechanical polishing (CMP) process. In the past decade, copper has emerged as the preferred interconnect material. The greatest challenge in Cu CMP at present is the control of wafer surface non-uniformity at various scales. As the size of a wafer has increased to 300 mm, the wafer-level non-uniformity has assumed critical importance. Moreover, the pattern geometry in each die has become quite complex due to a wide range of feature sizes and multi-level structures. Therefore, it is important to develop a non-uniformity model that integrates wafer-, die- and feature-level variations into a unified, multi-scale dielectric erosion and Cu dishing model. In this paper, a systematic way of characterizing and modeling dishing in the single-step Cu CMP process is presented. The possible causes of dishing at each scale are identified in terms of several geometric and process parameters. The feature-scale pressure calculation based on the step-height at each polishing stage is introduced. The dishing model is based on pad elastic deformation and the evolving pattern geometry, and is integrated with the wafer- and die-level variations. Experimental and analytical means of determining the model parameters are outlined and the model is validated by polishing experiments on patterned wafers. Finally, practical approaches for minimizing Cu dishing are suggested.
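The feature-scale pressure calculation based on step height can be illustrated with a deliberately simple linear-spring pad model. The stiffness value, the load-sharing rule, and all names below are assumptions for illustration, not the paper's calibrated model.

```python
# Illustrative feature-scale pressure split: treat the pad as a linear
# spring, so local pressure depends on the step height between high (Cu)
# and low (dielectric) features. Constants are invented for illustration.
def feature_pressures(p_nominal, step_height, pad_stiffness, area_frac_high):
    """Split the nominal pressure between high and low features.

    While the step height exceeds the pad's compression range, the high
    features carry the entire load; once the pad conforms, load shifts
    in proportion to the remaining step.
    """
    contact_depth = p_nominal / pad_stiffness   # pad compression at p_nominal
    if step_height >= contact_depth:
        # Pad does not touch the low areas: high features take all load.
        return p_nominal / area_frac_high, 0.0
    # Pad conforms partially: shift load by stiffness * step_height.
    delta = pad_stiffness * step_height
    p_high = p_nominal + (1 - area_frac_high) * delta
    p_low = p_nominal - area_frac_high * delta
    return p_high, p_low

p_hi, p_lo = feature_pressures(p_nominal=2.0, step_height=0.0005,
                               pad_stiffness=1000.0, area_frac_high=0.5)
print(p_hi, p_lo)   # partial conformance: high features still carry extra load
```

Note that the area-weighted average of the two pressures always equals the nominal pressure, i.e. the split conserves the applied load.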



Objective: To establish a prediction model of the degree of disability in adults with Spinal Cord Injury (SCI) based on the use of the WHO-DAS II. Methods: The disability degree was correlated with three variable groups: clinical, sociodemographic and those related to rehabilitation services. A multiple linear regression model was built to predict disability. 45 people with SCI of diverse etiology, neurological level and completeness participated. Patients were older than 18 and more than six months post-injury. The WHO-DAS II and the ASIA impairment scale (AIS) were used. Results: Variables that showed a significant relationship with disability were the following: occupational situation, type of affiliation to the public health care system, injury evolution time, neurological level, partial preservation zone, AIS motor and sensory scores, and number of clinical complications during the last year. Complications significantly associated with disability were joint pain, urinary infections, intestinal problems and autonomic dysreflexia. None of the variables related to rehabilitation services showed a significant association with disability. The disability degree exhibited significant differences in favor of the groups that received the following services: assistive-device supply and vocational, job or educational counseling. Conclusions: The best disability prediction model in adults with SCI more than six months post-injury was built with the variables injury evolution time, AIS sensory score and injury-related unemployment.
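The form of the final regression model can be sketched with ordinary least squares on the three retained predictors. All data below are synthetic and the generating coefficients are invented; only the variable names follow the abstract.

```python
# Sketch of a multiple linear regression with the three retained
# predictors (injury evolution time, AIS sensory score, injury-related
# unemployment). Synthetic data; not the study's actual estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 45                                              # study sample size
evolution_time = rng.uniform(6, 120, n)             # months post-injury
ais_sensory = rng.uniform(0, 224, n)                # AIS sensory score range
unemployed = rng.integers(0, 2, n).astype(float)    # injury-related unemployment

# Hypothetical relationship used to generate WHO-DAS II disability scores.
whodas = (40 + 0.05 * evolution_time - 0.08 * ais_sensory
          + 12 * unemployed + rng.normal(0, 3, n))

X = np.column_stack([np.ones(n), evolution_time, ais_sensory, unemployed])
beta, *_ = np.linalg.lstsq(X, whodas, rcond=None)
predicted = X @ beta
r2 = 1 - np.sum((whodas - predicted) ** 2) / np.sum((whodas - whodas.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```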


This paper considers an overlapping generations model in which capital investment is financed in a credit market with adverse selection. Lenders' inability to commit ex ante not to bail out ex post, together with a wealthy position of entrepreneurs, gives rise to the soft budget constraint syndrome, i.e. the absence of regular liquidation of poorly performing firms. This problem arises endogenously as a result of the interaction between the economic behavior of agents, without relying on political economy explanations. We find the problem to be more binding along the business cycle, providing an explanation for creditors' leniency during booms in some Latin American countries in the late seventies and early nineties.


The activated sludge and anaerobic digestion processes have been described in widely accepted models. Nevertheless, these models still have limitations when describing operational problems of microbiological origin. The aim of this thesis is to develop a knowledge-based model to simulate the risk of plant-wide operational problems of microbiological origin. For the risk model, heuristic knowledge from experts and the literature was implemented in a rule-based system. Using fuzzy logic, the system can infer a risk index for the main operational problems of microbiological origin (i.e. filamentous bulking, biological foaming, rising sludge and deflocculation). To show the results of the risk model, it was implemented in the Benchmark Simulation Models, which allowed the risk model's response to be studied in different scenarios and under different control strategies. The risk model has proved useful, providing a third criterion for evaluating control strategies alongside the economic and environmental criteria.
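A minimal sketch of the fuzzy rule-based inference, assuming one illustrative rule for filamentous bulking: the input variables, membership breakpoints, and the min operator for AND are common fuzzy-logic choices, not the thesis's actual knowledge base.

```python
# Fuzzy rule-based risk inference, illustrated with a single invented
# rule: IF dissolved oxygen is low AND F/M ratio is low THEN the risk
# of filamentous bulking is high. Min implements the fuzzy AND.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def bulking_risk(dissolved_oxygen, f_m_ratio):
    """Rule strength (0..1) serves directly as the risk index."""
    do_low = tri(dissolved_oxygen, -1.0, 0.0, 2.0)   # mg O2 / L
    fm_low = tri(f_m_ratio, -0.1, 0.0, 0.3)          # kg BOD / kg MLSS / d
    return min(do_low, fm_low)

print(bulking_risk(0.5, 0.1))   # low DO + low loading -> elevated risk
print(bulking_risk(3.0, 0.5))   # healthy operation -> zero risk
```

A full system would aggregate many such rules per operational problem and defuzzify the result into the final risk index.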


Preferred structures in surface pressure variability are investigated and compared in two 100-year simulations of the Hadley Centre climate model HadCM3. In the first (control) simulation, the model is forced with the pre-industrial carbon dioxide concentration (1×CO2), and in the second the model is forced with doubled CO2 concentration (2×CO2). Daily winter (December-January-February) surface pressures over the Northern Hemisphere are analysed. The identification of preferred patterns is addressed using multivariate mixture models. For the control simulation, two significant flow regimes are obtained at the 5% and 2.5% significance levels within the state space spanned by the leading two principal components. They show a high pressure centre over the North Pacific/Aleutian Islands associated with a low pressure centre over the North Atlantic, and its reverse. For the 2×CO2 simulation, no such behaviour is obtained. In a higher-dimensional state space, flow patterns are obtained from both simulations. They are found to be significant at the 1% level for the control simulation and at the 2.5% level for the 2×CO2 simulation. Hence, under CO2 doubling, regime behaviour in the large-scale wave dynamics weakens. Doubling the greenhouse gas concentration affects both the frequency of occurrence of regimes and the pattern structures. The less frequent regime becomes amplified and the more frequent regime weakens. The largest change is observed over the Pacific, where a significant deepening of the Aleutian low is obtained under CO2 doubling.
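Regime identification with a mixture model can be sketched with a two-component Gaussian EM fitted to a synthetic leading principal component; the bimodal sample stands in for the two regimes, and all numbers are invented.

```python
# Two-component Gaussian mixture fitted by EM to a synthetic leading
# principal component of surface pressure. The two fitted means play
# the role of the two flow-regime centres.
import numpy as np

rng = np.random.default_rng(2)
pc1 = np.concatenate([rng.normal(-2, 1, 400), rng.normal(2, 1, 600)])

mu = np.array([-1.0, 1.0]); sigma = np.array([1.0, 1.0]); w = np.array([0.5, 0.5])
for _ in range(200):
    # E-step: responsibilities of each component for each sample.
    pdf = w * np.exp(-0.5 * ((pc1[:, None] - mu) / sigma) ** 2) \
          / (sigma * np.sqrt(2 * np.pi))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    w = nk / len(pc1)
    mu = (resp * pc1[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (pc1[:, None] - mu) ** 2).sum(axis=0) / nk)

print("regime centres:", np.sort(mu))
```

In practice the significance of such regimes is then assessed against a null hypothesis of a single Gaussian, as in the mixture-model tests used in the study.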


We discuss and test the potential usefulness of single-column models (SCMs) for testing stochastic physics schemes that have been proposed for use in general circulation models (GCMs). We argue that although single-column tests cannot be definitive in exposing the full behaviour of a stochastic method in the full GCM, and although there are differences between SCM testing of deterministic and stochastic methods, SCM testing remains a useful tool. It is necessary to consider an ensemble of SCM runs produced by the stochastic method. These can be usefully compared to deterministic ensembles describing initial-condition uncertainty, and also to combinations of these (with structural model changes) into poor man's ensembles. The proposed methodology is demonstrated using an SCM experiment recently developed by the GCSS (GEWEX Cloud System Study) community, simulating transitions between active and suppressed periods of tropical convection.
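The contrast between initial-condition and stochastic-physics ensembles can be illustrated with a toy relaxation model in place of a real SCM; the tendency equation, noise amplitude, and all constants are assumptions.

```python
# Toy "single-column" model: temperature relaxes toward equilibrium.
# One ensemble perturbs the initial condition; the other multiplies the
# physics tendency by random noise, mimicking a stochastically
# perturbed physics scheme. This stands in for a real SCM.
import numpy as np

def run_column(t0, steps=100, dt=0.1, stochastic=False, rng=None):
    """Integrate a relaxation tendency, optionally with multiplicative
    noise on the tendency (the stochastic-physics analogue)."""
    t = t0
    for _ in range(steps):
        tendency = -0.5 * (t - 290.0)               # relax toward 290 K
        if stochastic:
            tendency *= 1.0 + 0.3 * rng.standard_normal()
        t = t + dt * tendency
    return t

rng = np.random.default_rng(3)
ic_ensemble = [run_column(300.0 + rng.normal(0, 1)) for _ in range(50)]
sp_ensemble = [run_column(300.0, stochastic=True, rng=rng) for _ in range(50)]
print(f"IC spread {np.std(ic_ensemble):.3f} K, "
      f"stochastic spread {np.std(sp_ensemble):.3f} K")
```

Comparing the two spreads member by member is the kind of ensemble comparison the paper argues SCM testing makes cheap.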


Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface.
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
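The scale-dependent validation idea can be sketched by splitting fields into coarse block means and fine within-block residuals. The actual study used a REML-estimated linear mixed model, so this block-averaging split is only a simplified stand-in on synthetic data.

```python
# Sketch of scale-dependent validation: compare predictions with
# observations separately at a coarse scale (block means, a stand-in
# for the trend/coarse component) and a fine scale (within-block
# residuals). Synthetic soil-N fields; the model "misses" the fine scale.
import numpy as np

rng = np.random.default_rng(4)
n_blocks, per_block = 20, 25                                 # 500 grid cells
trend = np.repeat(rng.normal(50, 10, n_blocks), per_block)   # coarse soil-N field
fine = rng.normal(0, 5, n_blocks * per_block)                # fine-scale variation
observed = trend + fine
predicted = trend + rng.normal(0, 5, n_blocks * per_block)

def split_scales(x, n_blocks, per_block):
    """Return (block means, within-block residuals) of a gridded field."""
    blocks = x.reshape(n_blocks, per_block)
    coarse = blocks.mean(axis=1)
    residual = (blocks - coarse[:, None]).ravel()
    return coarse, residual

obs_c, obs_f = split_scales(observed, n_blocks, per_block)
pre_c, pre_f = split_scales(predicted, n_blocks, per_block)
r_coarse = np.corrcoef(obs_c, pre_c)[0, 1]
r_fine = np.corrcoef(obs_f, pre_f)[0, 1]
print(f"coarse-scale r = {r_coarse:.2f}, fine-scale r = {r_fine:.2f}")
```

The output reproduces the qualitative finding: strong agreement at the coarse (trend) scale and poor correlation at the fine scale.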


The aim of the study was to establish and verify a predictive vegetation model for plant community distribution in the alti-Mediterranean zone of the Lefka Ori massif, western Crete. Based on previous work, three variables were identified as significant determinants of plant community distribution: altitude, slope angle and geomorphic landform. The response of four community types to these variables was tested using classification tree analysis in order to model community type occurrence. V-fold cross-validation plots were used to determine the length of the best-fitting tree. The final nine-node tree selected classified 92.5% of the samples correctly. The results were used to provide decision rules for the construction of a spatial model for each community type. The model was implemented within a Geographical Information System (GIS) to predict the distribution of each community type in the study site. Evaluation of the model in the field using an error matrix gave an overall accuracy of 71%. The user's accuracy was higher for the Crepis-Cirsium (100%) and Telephium-Herniaria (66.7%) community types and relatively lower for the Peucedanum-Alyssum and Dianthus-Lomelosia community types (63.2% and 62.5%, respectively). Misclassification and field validation point to the need for improved geomorphological mapping and suggest the presence of transitional communities between existing community types.
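Turning a fitted tree into GIS decision rules might look like the sketch below. The thresholds and landform classes are invented for illustration; only the community type names follow the abstract, and the real rules come from the study's nine-node tree.

```python
# Sketch of classification-tree decision rules applied cell by cell,
# as a GIS spatial model would. Thresholds and landform classes are
# hypothetical; the community names are from the abstract.
def classify_community(altitude_m, slope_deg, landform):
    if landform == "doline":
        return "Crepis-Cirsium"
    if altitude_m > 2200:
        return "Telephium-Herniaria" if slope_deg > 25 else "Peucedanum-Alyssum"
    return "Dianthus-Lomelosia"

# Applying the rules to each raster cell yields the predicted map.
cells = [(2400, 30, "scree"), (2400, 10, "scree"),
         (1900, 15, "scree"), (2000, 5, "doline")]
for cell in cells:
    print(cell, "->", classify_community(*cell))
```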


Virtual learning environments (VLEs) appear to be particularly effective for computer-supported collaborative work (CSCW) in active learning. Most research studies looking at computer-supported collaborative design have focused on either synchronous or asynchronous modes of communication, but near-synchronous working has received relatively little attention. Yet it could be argued that near-synchronous communication encourages creative, rhetorical and critical exchanges of ideas, building on each other's contributions. Furthermore, although many researchers have carried out studies on collaborative design protocol, argumentation and constructive interaction, little is known about the interaction between drawing and dialogue in near-synchronous collaborative design. The paper reports the first stage of an investigation into the requirements for the design and development of interactive systems to support the learning of collaborative design activities. The aim of the study is to understand collaborative design processes while sketching in a shared whiteboard and audio-conferencing medium. Empirical data on design processes have been obtained from observation of seven sessions with groups of design students solving an interior space-planning problem for a lounge-diner in a virtual learning environment, Lyceum, in-house software developed by the Open University to support its students in collaborative learning.