994 results for Modeling breakthrough curves


Relevance: 20.00%

Publisher:

Abstract:

Communication is the process of transmitting data across a channel. Whenever data is transmitted across a channel, errors are likely to occur. Coding theory is a branch of science that deals with finding efficient ways to encode and decode data, so that any likely errors can be detected and corrected. There are many methods to achieve coding and decoding. One among them is Algebraic Geometric codes, which can be constructed from curves. Cryptography is the science of securely transmitting messages from a sender to a receiver. The objective is to encrypt the message in such a way that an eavesdropper would not be able to read it. A cryptosystem is a set of algorithms for encrypting and decrypting messages. Public key cryptosystems such as RSA and DSS have traditionally been preferred for secure communication through the channel. However, elliptic curve cryptosystems have become a viable alternative, since they provide greater security and use shorter keys than other existing cryptosystems. Elliptic curve cryptography is based on the group of points on an elliptic curve over a finite field. This thesis deals with Algebraic Geometric codes and their relation to cryptography using elliptic curves. Here Goppa codes are used, and the curves used are elliptic curves over a finite field. We relate Algebraic Geometric codes to cryptography by developing a cryptographic algorithm, which includes the processes of encryption and decryption of messages. We make use of fundamental properties of elliptic curve cryptography in deriving the algorithm, and these properties are used here to relate the two.
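As an aside, the group law the abstract refers to can be sketched in a few lines. The following is a generic, illustrative implementation of point addition and double-and-add scalar multiplication on a curve y^2 = x^3 + a*x + b over GF(p); the function names and the use of Python's built-in modular inverse are assumptions for illustration, not the thesis's algorithm.

    # Elliptic-curve arithmetic over a prime field GF(p); None is the point at infinity.
    # Requires Python 3.8+ for pow(x, -1, p) (modular inverse).
    def ec_add(P, Q, a, p):
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                        # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        y3 = (lam * (x1 - x3) - y1) % p
        return (x3, y3)

    def ec_mul(k, P, a, p):
        # double-and-add scalar multiplication, the core operation behind EC key pairs
        R = None
        while k:
            if k & 1:
                R = ec_add(R, P, a, p)
            P = ec_add(P, P, a, p)
            k >>= 1
        return R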

Relevance: 20.00%

Publisher:

Abstract:

There has been much research on analyzing various forms of competing risks data. Nevertheless, there are several occasions in survival studies where the existing models and methodologies are inadequate for the analysis of competing risks data. The identifiability problem and the various types of censoring introduce more complications in the analysis of competing risks data than in classical survival analysis. Parametric models are not adequate for the analysis of competing risks data, since the assumptions about the underlying lifetime distributions may not hold well. Motivated by this, in the present study we develop some new inference procedures, which are completely distribution free, for the analysis of competing risks data.
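For orientation, a standard distribution-free quantity in competing risks analysis is the cumulative incidence function for one cause. The sketch below computes the usual nonparametric (Aalen-Johansen type) estimate; it illustrates competing risks estimation in general, not the new procedures proposed in the study, and the function name and encoding of causes are assumptions.

    import numpy as np

    def cumulative_incidence(times, causes, cause):
        """Nonparametric cumulative incidence for one cause in competing risks.

        times  : observed times
        causes : 0 = censored, 1, 2, ... = failure cause
        cause  : cause of interest
        Returns arrays of event times and cumulative incidence values.
        """
        times = np.asarray(times, float)
        causes = np.asarray(causes, int)
        order = np.argsort(times)
        times, causes = times[order], causes[order]

        surv, cif = 1.0, 0.0           # overall survival just before t, running CIF
        out_t, out_cif = [], []
        i, n = 0, len(times)
        while i < n:
            t = times[i]
            same = times == t
            at_risk = np.sum(times >= t)
            d_all = np.sum(same & (causes > 0))        # failures from any cause at t
            d_cause = np.sum(same & (causes == cause)) # failures from the cause of interest
            cif += surv * d_cause / at_risk
            surv *= 1.0 - d_all / at_risk
            out_t.append(t)
            out_cif.append(cif)
            i += int(np.sum(same))
        return np.array(out_t), np.array(out_cif)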

Relevance: 20.00%

Publisher:

Abstract:

This paper discusses the complexities involved in managing and monitoring the delivery of IT services in a multiparty outsourcing environment. The complexities identified are grouped into four categories and tabulated. A discussion of an attempt to model a multiparty outsourcing scenario using UML is also presented and explained with an illustration. Such a model, when supplemented by a performance evaluation tool, can enable an organization to manage the provision of IT services in a multiparty outsourcing environment more effectively.

Relevance: 20.00%

Publisher:

Abstract:

In safety-critical software, failure can have a high price. Such software should be free of errors before it is put into operation. Application of formal methods in the Software Development Life Cycle helps to ensure that software for safety-critical missions is ultra-reliable. The PVS theorem prover, a formal-method tool, can be used for the formal verification of software written in the ADA Language for Flight Software Application (ALFA). This paper describes the modeling of ALFA programs for the PVS theorem prover. An ALFA2PVS translator is developed which automatically converts software in ALFA to a PVS specification. With this approach the software can be formally verified with respect to underflow/overflow errors and divide-by-zero conditions without actually executing the code.
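As a purely hypothetical illustration of the idea (not the actual ALFA2PVS translator), the sketch below walks a toy expression tree and emits PVS-style proof obligations for divide-by-zero and integer-range checks; the 32-bit range, the tuple encoding of expressions, and all helper names are assumptions.

    # Hypothetical illustration only -- not the actual ALFA2PVS tool.
    # A toy expression is a variable name (str) or a tuple (op, lhs, rhs).
    INT_MIN, INT_MAX = -2**31, 2**31 - 1   # assumed 32-bit integer range

    def render(expr):
        if isinstance(expr, tuple):
            op, lhs, rhs = expr
            return f"({render(lhs)} {op} {render(rhs)})"
        return str(expr)

    def obligations(expr, acc=None):
        # collect side conditions for every compound sub-expression
        acc = [] if acc is None else acc
        if isinstance(expr, tuple):
            op, lhs, rhs = expr
            obligations(lhs, acc)
            obligations(rhs, acc)
            if op == '/':
                acc.append(f"{render(rhs)} /= 0")    # no division by zero
            acc.append(f"{INT_MIN} <= {render(expr)} AND {render(expr)} <= {INT_MAX}")
        return acc

    # Example: obligations(('/', ('+', 'a', 'b'), 'c')) yields the side
    # conditions a prover such as PVS would be asked to discharge.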

Relevance: 20.00%

Publisher:

Abstract:

A series of novel naphthyridine derivatives 3 and 4 was prepared from substituted pyridine 2 and ketones using ZnCl2 as catalyst under microwave irradiation conditions. All the compounds were evaluated for AChE inhibitory activity, and promising compounds 3d, 3e, 4b, and 4g were identified. Representative compounds 3d and 3e showed insignificant toxicity in THLE-2 liver cell viability assays. The binding mode between the X-ray crystal structure of human AChE and the compounds was studied using a molecular docking method, and the fitness scores were found to be in good correlation with the activity data.

Relevance: 20.00%

Publisher:

Abstract:

Metal matrix composites (MMC) having aluminium (Al) in the matrix phase and silicon carbide particles (SiCp) in the reinforcement phase, i.e. Al-SiCp type MMC, have gained popularity in the recent past. In this competitive age, manufacturing industries strive to produce superior-quality products at a reasonable price. This is possible by achieving higher productivity while performing machining at optimum combinations of process variables. The low weight and high strength of these MMC make them suitable for a variety of components.

Relevance: 20.00%

Publisher:

Abstract:

Lower partial moments play an important role in the analysis of risks and in income/poverty studies. In the present paper, we further investigate their importance in stochastic modeling and prove some characterization theorems arising out of them. We also identify their relationships with other important applied models, such as weighted and equilibrium models. Finally, some applications of lower partial moments in poverty studies are also examined.
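For reference, the lower partial moment of order r about a threshold t is commonly defined as below (a standard textbook definition given for orientation, not a result taken from the paper); for r = 1 and t equal to a poverty line, it is closely related to the average poverty gap used in poverty indices.

    % Lower partial moment of order r about threshold t for a random
    % variable X with distribution function F:
    \mathrm{LPM}_r(t) = E\left[(t - X)^r \,\mathbf{1}\{X \le t\}\right]
                      = \int_{-\infty}^{t} (t - x)^r \, dF(x)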

Relevance: 20.00%

Publisher:

Abstract:

This work presents an efficient method for volume rendering of glioma tumors from segmented 2D MRI datasets with user-interactive control, replacing the manual segmentation required in state-of-the-art methods. The most common primary brain tumors are gliomas, evolving from the cerebral supportive cells. For clinical follow-up, the evaluation of the pre-operative tumor volume is essential. Tumor portions were automatically segmented from 2D MR images using morphological filtering techniques. These segmented tumor slices were propagated and modeled with the software package. The 3D modeled tumor consists of the gray level values of the original image with the exact tumor boundary. Axial slices of FLAIR and T2-weighted images were used for extracting tumors. Volumetric assessment of tumor volume with manual segmentation of its outlines is a time-consuming process and is prone to error. These defects are overcome in this method. We verified the performance of our method on several sets of MRI scans. The 3D modeling was also done using segmented 2D slices with the help of a medical software package called 3D DOCTOR for verification purposes. The results were validated against ground truth models by the radiologist.
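A minimal sketch of the kind of threshold-plus-morphological-filtering step described above, assuming a single 2D slice as a NumPy array; the threshold heuristic, structuring element, and function name are illustrative assumptions, not the authors' exact pipeline.

    import numpy as np
    from scipy import ndimage as ndi

    def segment_tumor_slice(slice_2d, threshold=None):
        """Rough tumor mask for one 2D MR slice (illustrative only)."""
        img = slice_2d.astype(float)
        if threshold is None:
            threshold = img.mean() + 2.0 * img.std()        # assumed heuristic
        mask = img > threshold
        mask = ndi.binary_opening(mask, structure=np.ones((3, 3)))  # remove speckle
        mask = ndi.binary_fill_holes(mask)
        labels, n = ndi.label(mask)
        if n == 0:
            return np.zeros_like(mask)
        sizes = ndi.sum(mask, labels, index=range(1, n + 1))
        largest = int(np.argmax(sizes)) + 1
        return labels == largest                             # keep largest component

    # Tumor volume could then be approximated by summing the mask over all
    # slices and multiplying by the voxel volume.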

Relevance: 20.00%

Publisher:

Abstract:

In this paper, a novel fast method for modeling mammograms by a deterministic fractal coding approach to detect the presence of microcalcifications, which are early signs of breast cancer, is presented. The modeled mammogram obtained using the fractal encoding method is visually similar to the original image containing microcalcifications, and therefore, when it is subtracted from the original mammogram, the presence of microcalcifications can be enhanced. The limitation of fractal image modeling is the tremendous time required for encoding. In the present work, instead of searching for a matching domain in the entire domain pool of the image, three methods based on mean and variance, dynamic range of the image blocks, and mass center features are used. This reduced the encoding time by factors of 3, 89, and 13, respectively, in the three methods with respect to the conventional fractal image coding method with quadtree partitioning. The mammograms obtained from the Mammographic Image Analysis Society database (ground truth available) gave total detection scores of 87.6%, 87.6%, 90.5%, and 87.6% for the conventional method and the three proposed methods, respectively.
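To illustrate the mean/variance-based pruning of the domain pool described above (a sketch of the general idea, not the paper's exact implementation; names and the tolerance are assumptions):

    import numpy as np

    def block_features(block):
        # mean and variance features used to prune the domain-pool search
        return float(block.mean()), float(block.var())

    def candidate_domains(range_block, domain_blocks, tol=0.1):
        """Return indices of domain blocks whose mean/variance are close to
        those of the range block, so only they are searched exhaustively."""
        r_mean, r_var = block_features(range_block)
        keep = []
        for idx, d in enumerate(domain_blocks):
            d_mean, d_var = block_features(d)
            if abs(d_mean - r_mean) <= tol * (abs(r_mean) + 1e-9) and \
               abs(d_var - r_var) <= tol * (r_var + 1e-9):
                keep.append(idx)
        return keep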

Relevance: 20.00%

Publisher:

Abstract:

When simulation modeling is used for performance improvement studies of complex systems such as transport terminals, domain-specific conceptual modeling constructs can be used by modelers to create structured models. A two-stage procedure, which includes identification of the problem characteristics/cluster ('knowledge acquisition') and identification of standard models for the problem cluster ('model abstraction'), was found to be effective in creating structured models when applied to certain logistic terminal systems. In this paper we discuss some methods and examples related to the knowledge acquisition and model abstraction stages for the development of three different types of model categories of terminal systems.

Relevance: 20.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. With the invention of powerful telescopes and CCDs of high resolving power, variable star data are accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the interdisciplinary topic, Astrostatistics. For an observer on earth, stars that show a change in apparent brightness over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases it is due to external processes, like eclipses or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature.

Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is a characteristic of each type of variable star, so one way to identify and classify a variable star is visual inspection of its phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages, such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties like mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of the basic parameters period, amplitude and phase, together with some other derived parameters. Of these, the period is the most important, since wrong periods lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data in order to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behaviour. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic ray particles.

Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric methods (assuming some underlying distribution for the data) and non-parametric methods (assuming no statistical model, such as a Gaussian). Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong detection of a period can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, the finite sampling interval and the finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state, "The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
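As an illustration of one of the non-parametric methods discussed above, the following sketch folds a light curve on a trial period and computes the Phase Dispersion Minimisation theta statistic; it is a textbook-style implementation of PDM (Stellingwerf 1978), not the modified cubic spline method introduced in the thesis, and the bin count and function names are assumptions.

    import numpy as np

    def pdm_theta(time, mag, period, n_bins=10):
        """PDM statistic for one trial period: pooled variance of the
        magnitudes within phase bins divided by the total variance.
        The true period tends to give the smallest theta."""
        phase = np.mod(time, period) / period
        total_var = np.var(mag, ddof=1)
        num, dof = 0.0, 0
        for k in range(n_bins):
            in_bin = mag[(phase >= k / n_bins) & (phase < (k + 1) / n_bins)]
            if in_bin.size > 1:
                num += np.var(in_bin, ddof=1) * (in_bin.size - 1)
                dof += in_bin.size - 1
        pooled_var = num / dof if dof > 0 else np.inf
        return pooled_var / total_var

    def best_period(time, mag, trial_periods):
        # scan a grid of trial periods and return the one minimising theta
        thetas = [pdm_theta(time, mag, p) for p in trial_periods]
        return trial_periods[int(np.argmin(thetas))]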

Relevance: 20.00%

Publisher:

Abstract:

Ignoring the existence of the physical mechanisms that are responsible for the structure of hydrological time series can lead to wrong conclusions about the presence of possible memory effects, i.e. persistence. This doctoral thesis traces low-frequency climatic variability through the hydrological cycle and, on this "journey", offers new insights into the transformation of the characteristic properties of time series with long memory. The study combines statistical methods of time series analysis with empirically based modeling techniques in order to develop operational models that are able to (1) model the dynamics of runoff, (2) forecast its future behaviour and (3) estimate runoff time series at ungauged sites. As such, the thesis presents a detailed investigation of the causes of the low-frequency variability of hydrological time series in the German part of the Elbe river basin, the consequences of this variability, and the physically based responses of surface water and groundwater models to low-frequency precipitation input series. The thesis is structured as follows: Chapter 1 describes the Hurst phenomenon as background information and gives a short review of related studies. Chapter 2 discusses the influence of the presence of low-frequency periodic time series on the reliability of different Hurst parameter estimation techniques. Chapter 3 correlates the low-frequency precipitation variability with the index of the North Atlantic Oscillation (NAO). Chapters 4-6 focus on the German part of the Elbe basin. In Chapter 4, the low-frequency variabilities of the different hydro-meteorological parameters are examined, and models are described that simulate the dynamics of these low frequencies and their future behaviour. Chapter 5 discusses the possible application of the results for the characteristic scales and the methods of temporal variability analysis to practical questions in hydraulic engineering, as well as to the temporal estimation of catchment runoff at ungauged sites. Chapter 6 traces the low-frequency cycles in precipitation through the individual components of the hydrological cycle, namely direct runoff, baseflow, groundwater flow and catchment runoff, by means of empirical modeling. The conclusions are presented in Chapter 7. An appendix describes technical details of the statistical methods used and the software tools developed.
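For illustration, a minimal sketch of classical rescaled-range (R/S) analysis, one common way to estimate the Hurst exponent of a series; it is generic, not one of the specific estimation techniques evaluated in Chapter 2, and the dyadic segmentation and function names are assumptions.

    import numpy as np

    def rescaled_range(x):
        """R/S statistic of one series segment."""
        y = np.cumsum(x - x.mean())          # cumulative deviations from the mean
        r = y.max() - y.min()                # range of the cumulative deviations
        s = x.std(ddof=1)                    # sample standard deviation
        return r / s if s > 0 else np.nan

    def hurst_rs(series, min_chunk=8):
        """Estimate the Hurst exponent as the slope of log(R/S) versus log(n)
        over dyadic segment lengths n; the series should be reasonably long."""
        series = np.asarray(series, dtype=float)
        sizes, rs_values = [], []
        n = min_chunk
        while n <= len(series) // 2:
            chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
            rs_values.append(np.nanmean([rescaled_range(c) for c in chunks]))
            sizes.append(n)
            n *= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
        return slope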

Relevance: 20.00%

Publisher:

Abstract:

Land use is a crucial link between human activities and the natural environment and one of the main driving forces of global environmental change. Large parts of the terrestrial land surface are used for agriculture, forestry, settlements and infrastructure. Given the importance of land use, it is essential to understand the multitude of influential factors and the resulting land use patterns. An essential methodology to study and quantify such interactions is provided by land-use models. By applying land-use models, it is possible to analyze the complex structure of linkages and feedbacks and to determine the relevance of driving forces. Modeling land use and land use changes has a long tradition. In particular on the regional scale, a variety of models for different regions and research questions has been created. Modeling capabilities grow with steady advances in computer technology, which are driven on the one hand by increasing computing power and on the other hand by new methods in software development, e.g. object- and component-oriented architectures. In this thesis, SITE (Simulation of Terrestrial Environments), a novel framework for integrated regional land-use modeling, is introduced and discussed. Particular features of SITE are the notably extended capability to integrate models and the strict separation of application and implementation. These features enable efficient development, testing and usage of integrated land-use models. On its system side, SITE provides generic data structures (grid, grid cells, attributes, etc.) and takes responsibility for their administration. By means of a scripting language (Python) that has been extended with language features specific to land-use modeling, these data structures can be utilized and manipulated by modeling applications. The scripting language interpreter is embedded in SITE. The integration of sub-models can be achieved via the scripting language or through a generic interface provided by SITE. Furthermore, functionalities important for land-use modeling, such as model calibration, model tests and analysis support for simulation results, have been integrated into the generic framework. During the implementation of SITE, specific emphasis was laid on expandability, maintainability and usability. Along with the modeling framework, a land-use model for the analysis of the stability of tropical rainforest margins was developed in the context of the collaborative research project STORMA (SFB 552). In a research area in Central Sulawesi, Indonesia, socio-environmental impacts of land-use changes were examined. SITE was used to simulate land-use dynamics in the historical period of 1981 to 2002. Analogous to that, a scenario that did not consider migration in the population dynamics was analyzed. For the calculation of crop yields and trace gas emissions, the DAYCENT agro-ecosystem model was integrated. In this case study, it could be shown that land-use changes in the Indonesian research area were mainly characterized by the expansion of agricultural areas at the expense of natural forest. For this reason, the situation had to be interpreted as unsustainable, even though increased agricultural use implied economic improvements and higher farmers' incomes. Due to the importance of model calibration, it was explicitly addressed in the SITE architecture through the introduction of a specific component.

The calibration functionality can be used by all SITE applications and enables largely automated model calibration. Calibration in SITE is understood as a process that finds an optimal, or at least adequate, solution for a set of arbitrarily selectable model parameters with respect to an objective function. In SITE, an objective function typically is a map comparison algorithm capable of comparing a simulation result to a reference map. Several map optimization and map comparison methodologies are available and can be combined. The STORMA land-use model was calibrated using a genetic algorithm for optimization and the figure of merit map comparison measure as the objective function. The time period for the calibration ranged from 1981 to 2002. For this period, respective reference land-use maps were compiled. It could be shown that an efficient automated model calibration with SITE is possible. Nevertheless, the selection of the calibration parameters required detailed knowledge about the underlying land-use model and cannot be automated. In another case study, decreases in crop yields and resulting losses in income from coffee cultivation were analyzed and quantified under the assumption of four different deforestation scenarios. For this task, an empirical model describing the dependence of bee pollination and the resulting coffee fruit set on the distance to the closest natural forest was integrated. Land-use simulations showed that, depending on the magnitude and location of ongoing forest conversion, pollination services are expected to decline continuously. This results in a reduction of coffee yields of up to 18% and a loss of net revenues per hectare of up to 14%. However, the study also showed that ecological and economic values can be preserved if patches of natural vegetation are conserved in the agricultural landscape.
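For illustration, the figure of merit map-comparison measure mentioned above is commonly computed as the agreement between observed and simulated change divided by the union of all change, as in the sketch below (a generic formulation of this widely used measure; the thesis's exact objective function and category handling may differ).

    import numpy as np

    def figure_of_merit(initial, reference, simulated):
        """Figure of merit for land-use change maps: correctly simulated change
        divided by the union of observed and simulated change (all inputs are
        equally shaped category maps)."""
        obs_change = reference != initial
        sim_change = simulated != initial
        hits = np.sum(obs_change & sim_change & (reference == simulated))
        wrong_hits = np.sum(obs_change & sim_change & (reference != simulated))
        misses = np.sum(obs_change & ~sim_change)
        false_alarms = np.sum(~obs_change & sim_change)
        denom = hits + wrong_hits + misses + false_alarms
        return hits / denom if denom else 0.0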