867 results for Convective scheme
Tropical Mesoscale Convective Systems and Associated Energetics: Observational and Modeling Studies
Abstract:
The main purpose of this thesis is to improve the state of knowledge and understanding of the physical structure of TMCS and their short-range prediction. The present study principally addresses the fine structure, dynamics and microphysics of severe convective storms. The structure and dynamics of tropical cloud clusters over the Indian region are not well understood, and the observational cases discussed in the thesis are limited to temperature and humidity observations. We propose a mesoscale observational network combining all the available Doppler radars with other conventional and non-conventional observations. Simultaneous observations of the same cloud system with DWR, VHF and UHF radars will provide new insight into the dynamics and microphysics of the clouds. More cases have to be studied in detail to obtain a climatology of the storm types passing over the tropical Indian region. These observational data sets provide a wide variety of information that can be assimilated into a mesoscale data assimilation system and used to force a CSRM. The gravity wave generation and stratosphere-troposphere exchange (STE) processes associated with convection have gained a great deal of attention from modern science and meteorologists. Round-the-clock observations using VHF and UHF radars, along with supplementary data sets such as DWR, satellite, GPS/radiosonde, meteorological rocket and aircraft observations, are needed to explore the role of convection and the associated energetics in detail.
Abstract:
In recent years, protection of information in digital form has become more important. Image and video encryption has applications in various fields including Internet communications, multimedia systems, medical imaging, telemedicine and military communications. During storage as well as transmission, multimedia information is exposed to unauthorized entities unless adequate security measures are built around the information system. There are many kinds of security threats during the transmission of vital classified information through insecure communication channels. Various encryption schemes are available today to deal with information security issues. Data encryption is widely used to protect sensitive data against the security threat in the form of an "attack on confidentiality". Secure transmission of information through insecure communication channels also requires encryption at the sending side and decryption at the receiving side. Encryption of large text messages and images takes time before they can be transmitted, causing considerable delay in successive transmission of information in real time. In order to minimize this latency, efficient encryption algorithms are needed. An encryption procedure with adequate security and high throughput is sought in multimedia encryption applications. Traditional symmetric key block ciphers like the Data Encryption Standard (DES), Advanced Encryption Standard (AES) and Escrowed Encryption Standard (EES) are not efficient when the data size is large. With fast computing tools and communication networks available at relatively low cost today, these encryption standards appear not to be as fast as one would like. High-throughput encryption and decryption are becoming increasingly important in the area of high-speed networking, and fast encryption algorithms are needed for high-speed secure communication of multimedia data. It has been shown that public key algorithms are not a substitute for symmetric-key algorithms: public key algorithms are slow, whereas symmetric key algorithms generally run much faster, and public key systems are vulnerable to chosen-plaintext attack. In this research work, a fast symmetric key encryption scheme, entitled "Matrix Array Symmetric Key (MASK) encryption" and based on matrix and array manipulations, has been conceived and developed. Fast conversion is achieved through the matrix table look-up substitution, array-based transposition and circular shift operations performed in the algorithm. MASK encryption is a new concept in symmetric key cryptography. It employs a matrix and array manipulation technique using secret information and data values. It is a block cipher operating on plaintext message (or image) blocks of 128 bits using a secret key of 128 bits, producing ciphertext message (or cipher image) blocks of the same size. This cipher has two advantages over traditional ciphers. First, the encryption and decryption procedures are much simpler, and consequently much faster. Second, the key avalanche effect produced in the ciphertext output is better than that of AES.
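To make the matrix/array operations concrete, here is a minimal sketch of one encryption round in the spirit described above. It is an illustrative toy, not the published MASK algorithm: the key-derived S-box, the shift rule and the round structure are all assumptions.

```python
# Toy round illustrating MASK-style matrix/array manipulation (NOT the
# published algorithm): a 128-bit block viewed as a 4x4 byte matrix,
# passed through a key-derived substitution table, an array
# transposition, and key-dependent circular row shifts.
import numpy as np

def make_sbox(key: bytes) -> np.ndarray:
    # Key-derived byte permutation used for table look-up substitution
    # (hypothetical derivation: the key seeds a pseudo-random shuffle).
    rng = np.random.default_rng(list(key))
    return rng.permutation(256).astype(np.uint8)

def encrypt_block(block: bytes, key: bytes) -> bytes:
    assert len(block) == 16 and len(key) == 16   # 128-bit block and key
    sbox = make_sbox(key)
    m = np.frombuffer(block, dtype=np.uint8).reshape(4, 4)
    m = sbox[m]                        # matrix table look-up substitution
    m = m.T.copy()                     # array-based transposition
    for i in range(4):                 # key-dependent circular shifts
        m[i] = np.roll(m[i], key[i] % 4)
    return m.tobytes()

c = encrypt_block(b"sixteen byte msg", b"0123456789abcdef")
print(c.hex())
```

Every step here is a bijection on the block, so decryption simply applies the inverse operations in reverse order.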
Abstract:
Health insurance has become a necessity for the common man, next to food, clothing and shelter. Financing health expenses, whether for catastrophic or even frequently contracted illnesses, is a major cause of mental agony for the common man. The cost of care may result in the complete erosion of a family's savings or may even lead to indebtedness, as many studies on the causes of rural indebtedness bear testimony (Jayalakshmi, 2006). A suitable cover by way of health insurance is all that is required to cope with such situations. Health care insurance rightly provides the mechanism for both individuals and families to mitigate the financial burden of medical expenses in the present context. Hence a well-designed, affordable health insurance policy is the need of the hour. It is therefore very significant to study the extent to which beneficiaries in Kerala make use of the benefits provided by a social health insurance scheme like RSBY-CHIS. Based on the above points, this study assumes national relevance even though the geographical area of the study is limited to two districts of Kerala. The findings of the study will bring forth valuable inputs on the services availed by the beneficiaries of RSBY-CHIS and help take appropriate measures to improve the effectiveness of the scheme, so that maximum quality benefit can be availed by the poorest of the poor and the scheme can develop into a real dawn of a new era of health for them.
Assessment of Convective Activity Using Stability Indices as Inferred from Radiosonde and MODIS Data
Abstract:
The combined use of radiosonde data and three-dimensional satellite-derived data over ocean and land is useful for a better understanding of atmospheric thermodynamics. Here, an attempt is made to study the thermodynamic structure of the convective atmosphere during the pre-monsoon season over southwest peninsular India utilizing satellite-derived data and radiosonde data. The stability indices were computed for selected stations over southwest peninsular India, viz. Thiruvananthapuram and Cochin, using radiosonde data for five pre-monsoon seasons. The stability indices studied for the region are the Showalter Index (SI), K Index (KI), Lifted Index (LI), Total Totals Index (TTI), Humidity Index (HI) and Deep Convective Index (DCI), together with thermodynamic parameters such as Convective Available Potential Energy (CAPE) and Convective Inhibition Energy (CINE). The traditional Showalter Index has been modified to incorporate the thermodynamics over the tropical region. MODIS data over South Peninsular India are also used for the study. When there is a convective system over south peninsular India, the value of LI over the region is less than −4; on the other hand, a region where LI is more than 2 is comparatively stable, without any convection. Similarly, when KI values are in the range 35 to 40, there is a possibility of convection. The threshold value for TTI is found to be between 50 and 55. Further, we found that prior to convection, the dry bulb temperature at 1000, 850, 700 and 500 hPa is at a minimum and the dew point temperature is at a maximum, which leads to an increase in relative humidity. The total column water vapor is at a maximum in the convective region and at a minimum in the stable region. The threshold values for the different stability indices are found to agree with those reported in the literature.
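The K Index and Total Totals Index have standard formulas that can be computed directly from mandatory-level sounding data. The sketch below evaluates both and applies the thresholds reported in this abstract; the station values in the example are illustrative, not observed data.

```python
# K Index and Total Totals Index from mandatory-level sounding data
# (temperatures in degrees Celsius), classified with the thresholds
# reported above. Example values are illustrative, not observations.
def k_index(t850, t700, t500, td850, td700):
    # KI = (T850 - T500) + Td850 - (T700 - Td700)
    return (t850 - t500) + td850 - (t700 - td700)

def total_totals(t850, td850, t500):
    # TTI = T850 + Td850 - 2*T500
    return t850 + td850 - 2.0 * t500

def convection_likely(li, ki, tti):
    # Thresholds from the study: LI below -4, KI around 35-40,
    # and TTI between 50 and 55 favour convection.
    return li < -4 and ki >= 35 and tti >= 50

ki = k_index(t850=24.0, t700=10.0, t500=-6.0, td850=20.0, td700=6.0)
tti = total_totals(t850=24.0, td850=20.0, t500=-6.0)
print(ki, tti, convection_likely(li=-5.0, ki=ki, tti=tti))  # 46.0 56.0 True
```

The Lifted Index itself requires lifting a surface parcel to 500 hPa, so it is taken here as a precomputed input.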
Abstract:
Clustering schemes improve the energy efficiency of wireless sensor networks. The inclusion of mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster-head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol on metrics viz. the number of cluster-head transitions, average residual energy, number of alive nodes and number of messages lost.
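The abstract does not define the mobility factor itself, so the sketch below assumes a simple normalized form (recent speed relative to a maximum) and combines it with residual energy to elect a cluster head; the weights and the scoring rule are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of mobility-aware cluster-head election: prefer nodes
# that are slow-moving (stable cluster membership) and well-charged.
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    speed: float             # recent average speed, m/s
    residual_energy: float   # joules

def mobility_factor(node: Node, max_speed: float) -> float:
    # Normalized mobility in [0, 1]; lower means more stable.
    return min(node.speed / max_speed, 1.0) if max_speed > 0 else 0.0

def elect_cluster_head(cluster, max_speed=10.0, w_mob=0.6, w_energy=0.4):
    # Composite score: penalize mobility, reward residual energy.
    e_max = max(n.residual_energy for n in cluster)
    def score(n: Node) -> float:
        return (w_mob * (1.0 - mobility_factor(n, max_speed))
                + w_energy * n.residual_energy / e_max)
    return max(cluster, key=score)

cluster = [Node(1, 0.5, 4.0), Node(2, 6.0, 5.0), Node(3, 1.0, 2.5)]
print(elect_cluster_head(cluster).node_id)   # node 1: slow and well-charged
```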
Abstract:
Thunderstorms, resulting from vigorous convective activity, are among the most spectacular weather phenomena in the atmosphere. A common feature of the weather during the pre-monsoon season over the Indo-Gangetic Plain and northeast India is the outburst of severe local convective storms, commonly known as 'Nor'westers' (as they move from northwest to southeast). These severe thunderstorms, with their associated thunder, squall lines, lightning and hail, cause extensive losses in agriculture, damage to structures and loss of life. In this paper, sensitivity experiments have been conducted with the Non-hydrostatic Mesoscale Model (NMM) to test the impact of three microphysical schemes in capturing the severe thunderstorm event that occurred over Kolkata on 15 May 2009. The results show that the WRF-NMM model with the Ferrier microphysical scheme appears to reproduce the cloud and precipitation processes more realistically than the other schemes. We have also attempted to diagnose four severe thunderstorms that occurred during the pre-monsoon seasons of 2006, 2007 and 2008 through the simulated radar reflectivity fields from the NMM model with the Ferrier microphysics scheme, and validated the model results against Kolkata Doppler Weather Radar (DWR) observations. The composite radar reflectivity simulated by the WRF-NMM model clearly shows the movement of the severe thunderstorms as observed in the DWR imagery, but fails to capture the observed intensity. These analyses demonstrate the capability of the high-resolution WRF-NMM model in simulating severe thunderstorm events and indicate that the 3 km model improves upon current abilities in simulating severe thunderstorms over the east Indian region.
Abstract:
While channel coding is a standard method of improving a system's energy efficiency in digital communications, its practice does not extend to high-speed links. Increasing demands on network speeds place a large burden on the energy efficiency of high-speed links and render the benefit of channel coding for these systems a timely subject. The low error rates of interest and the presence of residual intersymbol interference (ISI) caused by hardware constraints impede the analysis and simulation of coded high-speed links. Focusing on residual ISI and combined noise as the dominant error mechanisms, this paper analyses error correlation through the concepts of error region, channel signature, and correlation distance. This framework provides deeper insight into joint error behaviours in high-speed links, extends the range of statistical simulation for coded high-speed links, and provides a case against the use of biased Monte Carlo methods in this setting.
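Residual ISI over a handful of taps is what makes these error mechanisms tractable semi-analytically: conditioning on each ISI pattern turns the error rate into an average of Gaussian tail probabilities. A minimal sketch of that idea follows, with illustrative tap values rather than measured channel data.

```python
# Semi-analytic BER estimate for a binary link with residual ISI plus
# Gaussian noise: enumerate the ISI patterns over a few residual taps
# and average the conditional error probabilities. Tap values and noise
# sigma are illustrative assumptions, not measured channel data.
import itertools, math

def q_func(x: float) -> float:
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_with_residual_isi(main_cursor, residual_taps, sigma):
    patterns = list(itertools.product([-1.0, 1.0], repeat=len(residual_taps)))
    total = 0.0
    for bits in patterns:
        isi = sum(b * t for b, t in zip(bits, residual_taps))
        total += q_func((main_cursor - isi) / sigma)  # error prob. given this pattern
    return total / len(patterns)

print(ber_with_residual_isi(main_cursor=1.0,
                            residual_taps=[0.15, -0.08, 0.05],
                            sigma=0.12))
```

With only a few residual taps, exhaustive enumeration covers all ISI patterns exactly, so no importance-biased sampling is needed for this part of the error mechanism.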
Abstract:
Coded OFDM is a transmission technique used in many practical communication systems. In a coded OFDM system, source data are coded, interleaved and multiplexed for transmission over many frequency sub-channels. In a conventional coded OFDM system, the transmission power of each subcarrier is the same regardless of the channel condition. However, some subcarriers can suffer deep fading in multipath channels, and the power allocated to a faded subcarrier is likely to be wasted. In this paper, we derive FER and BER bounds of a coded OFDM system as convex functions of the subcarrier powers for a given channel coder, interleaver and channel response. The power optimization is shown to be a convex optimization problem that can be solved numerically with great efficiency. With the proposed power optimization scheme, a near-optimum power allocation for a given coded OFDM system and channel response, minimizing FER or BER under a constant transmission power constraint, is obtained.
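The convexity is what makes the numerical solution cheap. As a stand-in for the paper's coded FER/BER bounds, the sketch below models each subcarrier's error bound as exp(-p_i * g_i), which is convex in the power p_i; the KKT conditions then give a closed-form per-subcarrier allocation with a single scalar multiplier found by bisection. The gains are illustrative.

```python
# Convex power allocation over subcarriers: minimize sum_i exp(-p_i*g_i)
# subject to sum_i p_i = P, p_i >= 0. Stationarity gives
# p_i = max(0, ln(g_i / nu) / g_i); bisect on the multiplier nu until
# the total-power constraint is met. (Model bound and gains assumed.)
import math

def optimal_powers(gains, total_power, iters=100):
    def alloc(nu):
        return [max(0.0, math.log(g / nu) / g) for g in gains]
    lo, hi = 1e-12, max(gains)       # at nu = max(g), allocation is all-zero
    for _ in range(iters):           # bisect on nu (log scale spans decades)
        nu = math.sqrt(lo * hi)
        if sum(alloc(nu)) > total_power:
            lo = nu                  # too much power allocated -> raise nu
        else:
            hi = nu
    return alloc(hi)

gains = [2.0, 0.5, 1.0, 0.1]         # per-subcarrier effective SNR gains
p = optimal_powers(gains, total_power=4.0)
print([round(x, 3) for x in p], round(sum(p), 3))
```

Note how the weakest subcarrier (gain 0.1) receives no power at all, which is exactly the behaviour the abstract motivates: power on deeply faded subcarriers is wasted.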
Abstract:
This thesis presents a method for the numerical solution of the two-dimensional shallow water equations, which model the flow behaviour of bodies of water whose surface extent is much larger than their depth. These equations describe the gravity-driven temporal evolution of a given initial state in free-surface flows. This class includes problems such as the behaviour of waves on shallow beaches or the movement of a flood wave in a river. These examples clearly show the need to account for the influence of topography and for the treatment of wet/dry transitions in the scheme. This dissertation presents a finite volume method, highly accurate in regions of sufficient water depth, for numerically computing the temporal evolution of the solution of the two-dimensional shallow water equations from given initial and boundary conditions on an unstructured grid. The method is able to account for the influence of topographic source terms on the flow and, in so-called "lake at rest" steady states, to balance this influence exactly against the numerical fluxes. The method is based on a first-order finite volume approach, extended by a WENO reconstruction using the least-squares method and a so-called space-time expansion, with the goal of obtaining a scheme of arbitrarily high order. The Riemann problems arising in the scheme are solved with the Riemann solver of Chinnayya, LeRoux and Seguin (1999), which takes the influence of topography on the flow into account. It is proved in the thesis that the coefficients of the reconstruction polynomials computed by the WENO scheme approximate the spatial derivatives of the function to be reconstructed to a degree of accuracy consistent with the order of the scheme. It is likewise proved that the coefficients of the polynomial resulting from the space-time expansion approximate the spatial and temporal derivatives of the solution of the initial value problem. Furthermore, the well-balancedness of the scheme is proved for arbitrarily high numerical order. For the treatment of wet/dry transitions, a method of order reduction depending on water depth and cell size is proposed. This is necessary to avoid negative values of the water depth, which can arise as a consequence of oscillations of the space-time polynomial. Numerical results confirming the theoretical order of the scheme are presented, along with examples demonstrating the excellent properties of the overall method on challenging problems.
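The well-balancing idea, that topography source terms must cancel the numerical fluxes exactly in "lake at rest" states, can be shown compactly in one dimension. The sketch below is a deliberately reduced stand-in for the method described above: first order rather than arbitrarily high order, a Rusanov flux with hydrostatic reconstruction (Audusse et al.) rather than the Chinnayya-LeRoux-Seguin solver, and a 1D uniform grid rather than a 2D unstructured one.

```python
# Well-balanced first-order finite volume step for the 1D shallow water
# equations with topography b(x); state U = (h, hu).
import numpy as np

g = 9.81
EPS = 1e-12

def flux(h, hu):
    # Physical flux F(U) = (hu, hu^2/h + g h^2 / 2).
    u = hu / max(h, EPS)
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov(hL, huL, hR, huR):
    # Local Lax-Friedrichs (Rusanov) numerical flux.
    s = max(abs(huL / max(hL, EPS)) + (g * hL) ** 0.5,
            abs(huR / max(hR, EPS)) + (g * hR) ** 0.5)
    return 0.5 * (flux(hL, huL) + flux(hR, huR)) \
         - 0.5 * s * np.array([hR - hL, huR - huL])

def step(h, hu, b, dx, dt):
    n = len(h)
    hp = np.concatenate(([h[0]], h, [h[-1]]))      # ghost cells (copied edges)
    hup = np.concatenate(([hu[0]], hu, [hu[-1]]))
    bp = np.concatenate(([b[0]], b, [b[-1]]))
    dh, dhu = np.zeros(n + 2), np.zeros(n + 2)
    for i in range(n + 1):                         # interface between i and i+1
        bmax = max(bp[i], bp[i + 1])
        hL = max(0.0, hp[i] + bp[i] - bmax)        # hydrostatic reconstruction
        hR = max(0.0, hp[i + 1] + bp[i + 1] - bmax)
        uL = hup[i] / max(hp[i], EPS)
        uR = hup[i + 1] / max(hp[i + 1], EPS)
        f = rusanov(hL, hL * uL, hR, hR * uR)
        # Momentum corrections: they cancel the pressure flux exactly in
        # lake-at-rest states, which is the well-balanced property.
        dh[i] -= f[0]
        dhu[i] -= f[1] + 0.5 * g * (hp[i] ** 2 - hL ** 2)
        dh[i + 1] += f[0]
        dhu[i + 1] += f[1] + 0.5 * g * (hp[i + 1] ** 2 - hR ** 2)
    return h + dt / dx * dh[1:-1], hu + dt / dx * dhu[1:-1]

# Lake at rest over a smooth bump: the discrete state should stay steady.
x = np.linspace(0.0, 1.0, 50)
b = 0.2 * np.exp(-100.0 * (x - 0.5) ** 2)
h, hu = 1.0 - b, np.zeros_like(x)
for _ in range(100):
    h, hu = step(h, hu, b, dx=x[1] - x[0], dt=0.002)
print(float(np.max(np.abs(hu))))                   # ~1e-16: well balanced
```

The clipping max(0, h + b - bmax) in the reconstruction also gives a crude nonnegative treatment of dry states; the thesis's order-reduction strategy addresses the same problem for the high-order scheme.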
Abstract:
Evapotranspiration (ET) is a complex process in the hydrological cycle that influences the quantity of runoff and thus the irrigation water requirements. Numerous methods have been developed to estimate potential evapotranspiration (PET). Unfortunately, most of the reliable PET methods are parameter-rich models and are therefore not feasible for application in data-scarce regions. On the other hand, the accuracy and reliability of simple PET models vary widely according to regional climate conditions. The objective of the present study was to evaluate the performance of three temperature-based and three radiation-based simple ET methods in estimating historical ET and projecting future ET at the Muda Irrigation Scheme in Kedah, Malaysia. Performance was measured by comparing those methods with the parameter-intensive Penman-Monteith method. It was found that the radiation-based methods performed better than the temperature-based methods in estimating ET in the study area. Future ET simulated from projected climate data obtained through statistical downscaling also showed that the radiation-based methods project ET values closer to those projected by the Penman-Monteith method. It is expected that the study will guide the selection of suitable methods for estimating and projecting ET in accordance with the availability of meteorological data.
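As one example from each family, the sketch below implements a common temperature-based method (Hargreaves) and a common radiation-based method (Priestley-Taylor); whether these are among the study's six methods is an assumption, and the input values are illustrative rather than Muda data.

```python
# One temperature-based and one radiation-based PET estimate (mm/day).
import math

def hargreaves(tmean, tmax, tmin, ra):
    # Hargreaves (1985); ra = extraterrestrial radiation in mm/day
    # equivalent; needs only temperature data.
    return 0.0023 * ra * (tmean + 17.8) * math.sqrt(max(tmax - tmin, 0.0))

def priestley_taylor(tmean, rn, soil_flux=0.0, alpha=1.26, gamma=0.066):
    # Priestley-Taylor; rn and soil_flux in MJ m-2 day-1; the latent
    # heat of vaporization is taken as 2.45 MJ/kg.
    delta = (4098.0 * 0.6108 * math.exp(17.27 * tmean / (tmean + 237.3))
             / (tmean + 237.3) ** 2)     # slope of sat. vapour curve, kPa/C
    return alpha * delta / (delta + gamma) * (rn - soil_flux) / 2.45

# Illustrative humid-tropics day (values assumed, not Muda observations):
print(round(hargreaves(tmean=28.0, tmax=33.0, tmin=24.0, ra=16.0), 2))
print(round(priestley_taylor(tmean=28.0, rn=14.0), 2))
```

The contrast in inputs mirrors the study's trade-off: Hargreaves needs only temperatures, while Priestley-Taylor needs net radiation but tracks the energy-driven Penman-Monteith estimate more closely.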
Abstract:
Presentation given at the Al-Azhar Engineering First Conference, AEC'89, Dec. 9-12, 1989, Cairo, Egypt. The paper presented at AEC'89 suggests an infinite storage scheme divided into one volume which is online and an arbitrary number of offline volumes, arranged into a linear chain, which hold records that have not been accessed recently. The online volume holds the records in sorted order (e.g. as a B-tree) and contains the shortest prefixes of the keys of records already pushed offline. As new records enter, older ones are retired to the volume that will go offline next. Statistical arguments are given for the rate at which an offline volume needs to be fetched to reload a record that had been retired earlier. This rate depends on the distribution of access probabilities as a function of time. Applications are medical records, production records and other data which need to be kept for a long time for legal reasons.
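A minimal sketch of the online/offline split follows. The data structures are assumptions: a plain dict stands in for the sorted B-tree, offline volumes are lists, and the prefix-disambiguation rule is a guess at what "shortest prefixes" means operationally.

```python
# Toy model of the scheme: the online volume keeps full records plus,
# for each retired record, a key prefix just long enough to route a
# lookup to the offline volume holding it.
class InfiniteStore:
    def __init__(self):
        self.online = {}     # key -> record (recently used)
        self.retired = {}    # key prefix -> offline volume number
        self.offline = []    # offline volumes: lists of (key, record)

    def put(self, key, record):
        self.online[key] = record

    def retire(self, keys):
        # Push the given records to a fresh offline volume; keep prefixes.
        volume = []
        for k in keys:
            volume.append((k, self.online.pop(k)))
            self.retired[self._shortest_free_prefix(k)] = len(self.offline)
        self.offline.append(volume)

    def _shortest_free_prefix(self, key):
        # Shortest prefix not already routing elsewhere (assumed rule;
        # the paper's exact disambiguation may differ).
        for i in range(1, len(key) + 1):
            if key[:i] not in self.retired:
                return key[:i]
        return key

    def get(self, key):
        if key in self.online:
            return self.online[key]
        for i in range(1, len(key) + 1):          # match a retired prefix
            vol = self.retired.get(key[:i])
            if vol is not None:
                return dict(self.offline[vol]).get(key)   # "fetch" volume
        return None

s = InfiniteStore()
s.put("patient-0001", "record A"); s.put("patient-0002", "record B")
s.retire(["patient-0001"])
print(s.get("patient-0001"))      # reloaded from offline volume 0
```

The paper's statistical argument concerns how often the get() path falls through to an offline fetch, which depends on how access probability decays with record age.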
Abstract:
The Scheme86 and the HP Precision Architectures represent different trends in computer processor design. The former uses wide micro-instructions, parallel hardware, and a low-latency memory interface. The latter encourages pipelined implementation and visible interlocks. To compare the merits of these approaches, algorithms frequently encountered in numerical and symbolic computation were hand-coded for each architecture. Timings were done in simulators and the results were evaluated to determine the speed of each design. Based on these measurements, conclusions were drawn as to which aspects of each architecture are suitable for a high-performance computer.
Abstract:
This paper presents a DHT-based grid resource indexing and discovery (DGRID) approach. With DGRID, resource-information data is stored within its own administrative domain, and each domain, represented by an index server, is virtualized into several nodes (virtual servers) according to the number of resource types it has. All nodes are then arranged as a structured overlay network or distributed hash table (DHT). Compared to existing grid resource indexing and discovery schemes, the benefits of DGRID include improving the security of domains, increasing the availability of data, and eliminating stale data.
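The sketch below illustrates the virtualization idea under stated assumptions: a Chord-like SHA-1 ring (the abstract does not name the underlying DHT), one virtual server per (domain, resource type) pair, and naive linear ring traversal in place of finger-table routing.

```python
# Each domain's index server joins the overlay as one virtual node per
# resource type it owns, so the domain keeps its own data while being
# reachable through the DHT.
import hashlib
from bisect import bisect_right

def ring_id(name: str, bits: int = 32) -> int:
    # Position on the identifier ring (SHA-1, truncated).
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % (1 << bits)

class DGridRing:
    def __init__(self):
        self.ring = []                          # sorted (id, (domain, rtype))

    def join(self, domain: str, resource_types):
        # One virtual server per resource type the domain owns.
        for rt in resource_types:
            self.ring.append((ring_id(f"{domain}/{rt}"), (domain, rt)))
        self.ring.sort()

    def lookup(self, resource_type: str):
        # Walk clockwise from the type's key to the first virtual server
        # indexing that type (simplified routing; a real Chord overlay
        # would reach it in O(log n) hops via finger tables).
        ids = [i for i, _ in self.ring]
        start = bisect_right(ids, ring_id(resource_type))
        for k in range(len(self.ring)):
            domain, rt = self.ring[(start + k) % len(self.ring)][1]
            if rt == resource_type:
                return domain
        return None

r = DGridRing()
r.join("domainA", ["cpu", "storage"])
r.join("domainB", ["cpu"])
print(r.lookup("cpu"))
```

Because each virtual server answers only for its own domain's resources, an index entry never leaves its administrative domain, which is the security and staleness argument made above.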
Abstract:
This paper presents a new charging scheme for cost distribution along a point-to-multipoint connection when the destination nodes are responsible for the cost. The scheme focuses on QoS considerations, and a complete range of choices is presented, going from a scheme that is safe for the network operator to one that is fair to the customer; the in-between cases are also covered. Specific and general problems, such as the effect of users disconnecting dynamically, are also discussed. The aim of this scheme is to encourage users to disperse the resource demand instead of having a large number of direct connections to the source of the data, which would result in a higher than necessary bandwidth use at the source; this benefits the overall performance of the network. The implementation of this task must balance the necessity of offering a competitive service against the risk of the network operator not recovering the cost of that service. Throughout this paper, reference to multicast charging is made without reference to any specific category of service. The proposed scheme is also evaluated against the criteria set proposed in the European ATM charging project CANCAN.
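One concrete point in the safe-to-fair range is to split each link's cost equally among the receivers downstream of it, so shared links get cheaper per receiver as the tree fans out, which is exactly the incentive to join the multicast tree rather than open a unicast connection. The sketch below assumes that rule plus an illustrative tree; the paper's actual charging functions are not reproduced.

```python
# Receiver-pays cost sharing on a point-to-multipoint tree: each
# receiver is charged, for every link on its path, that link's cost
# divided by the number of receivers downstream of the link.
def charge_receivers(tree, link_cost, receivers, root="src"):
    downstream, charges = {}, {r: 0.0 for r in receivers}

    def count(node):
        # Number of receivers in the subtree rooted at node.
        c = (1 if node in receivers else 0)
        c += sum(count(child) for child in tree.get(node, []))
        downstream[node] = c
        return c

    def assign(node, cost_so_far):
        if node in receivers:
            charges[node] += cost_so_far
        for child in tree.get(node, []):
            share = link_cost[(node, child)] / max(downstream[child], 1)
            assign(child, cost_so_far + share)

    count(root)
    assign(root, 0.0)
    return charges

tree = {"src": ["a"], "a": ["r1", "b"], "b": ["r2", "r3"]}
cost = {("src", "a"): 6.0, ("a", "r1"): 1.0, ("a", "b"): 4.0,
        ("b", "r2"): 1.0, ("b", "r3"): 1.0}
print(charge_receivers(tree, cost, {"r1", "r2", "r3"}))
# {'r1': 3.0, 'r2': 5.0, 'r3': 5.0} -- sums to the total tree cost of 13
```

Equal splitting recovers the full cost for the operator (safe) while charging each receiver no more than a direct connection would (fair); weighting the split differently moves along the range the paper describes.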
Abstract:
This paper presents a hybrid behavior-based scheme using reinforcement learning for the high-level control of autonomous underwater vehicles (AUVs). The two main features of the presented approach are hybrid behavior coordination and semi on-line neural-Q_learning (SONQL). Hybrid behavior coordination takes advantage of the robustness and modularity of the competitive approach as well as the efficient trajectories of the cooperative approach. SONQL, a new continuous approach to the Q_learning algorithm using a multilayer neural network, is used to learn the behavior state/action mapping online. Experimental results show the feasibility of the presented approach for AUVs.
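To make the neural Q-learning ingredient concrete, here is a minimal sketch of a TD(0) Q-learning update through a small two-layer network. The network size, activation and discrete action set are assumptions for illustration; the paper's semi-online database of learning samples is not reproduced.

```python
# Minimal neural Q-learning update: a two-layer tanh network maps a
# continuous state to one Q-value per discrete action; the TD error for
# the taken action is backpropagated through that action's output.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN, LR, GAMMA = 4, 3, 16, 0.01, 0.95
W1 = rng.normal(0, 0.1, (HIDDEN, STATE_DIM)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN)); b2 = np.zeros(N_ACTIONS)

def q_values(s):
    h = np.tanh(W1 @ s + b1)
    return W2 @ h + b2, h

def td_update(s, a, r, s_next, done):
    # Q-learning target: r + gamma * max_a' Q(s', a').
    global W1, b1, W2, b2
    q, h = q_values(s)
    q_next, _ = q_values(s_next)
    target = r + (0.0 if done else GAMMA * q_next.max())
    err = target - q[a]                       # TD error for the taken action
    # Gradient of 0.5 * err^2 w.r.t. the network parameters.
    gq = np.zeros(N_ACTIONS); gq[a] = -err
    gW2 = np.outer(gq, h); gb2 = gq
    gh = (W2.T @ gq) * (1.0 - h ** 2)         # backprop through tanh
    gW1 = np.outer(gh, s); gb1 = gh
    W2 -= LR * gW2; b2 -= LR * gb2; W1 -= LR * gW1; b1 -= LR * gb1
    return err

s = rng.normal(size=STATE_DIM)
print(td_update(s, a=1, r=1.0, s_next=rng.normal(size=STATE_DIM), done=False))
```

In a behavior-based controller, one such learner per behavior maps its state input to action preferences, and the hybrid coordinator then merges the behaviors' outputs.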