949 results for Processing Time
Abstract:
This study examines how increased memory utilisation affects throughput and energy consumption in scientific computing, especially in high-energy physics. Our aim is to minimise the energy consumed by a set of jobs without increasing the processing time. Earlier tests indicated that, especially in data analysis, throughput can increase by over 100% and energy consumption can decrease by 50% when multiple jobs are processed in parallel per CPU core. Since jobs are heterogeneous, it is not possible to find a single optimum value for the number of parallel jobs. A better solution is based on memory utilisation, but finding an optimum memory threshold is not straightforward. Therefore, a fuzzy logic-based algorithm was developed that can dynamically adapt the memory threshold based on the overall load. In this way, it is possible to keep memory consumption stable under different workloads while achieving significantly higher throughput and energy efficiency than traditional approaches that use a fixed number of jobs or a fixed memory threshold.
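As a rough illustration of the approach (the controller below is a hypothetical sketch, not the paper's actual algorithm; all names and membership values are invented), a fuzzy-style rule set can nudge the admission threshold toward a comfort band of memory utilisation, and new jobs are started only while utilisation stays below the current threshold:

```python
# Hypothetical sketch of a fuzzy-style memory-threshold controller: the
# threshold for admitting more parallel jobs drifts up when memory is
# underused and down under memory pressure, instead of fixing a job count.

def adapt_threshold(util: float, threshold: float,
                    low: float = 0.6, high: float = 0.85,
                    step: float = 0.02) -> float:
    """Return an adjusted admission threshold, clamped to [0.10, 0.95].

    Triangular membership around the [low, high] comfort band:
    far below it -> raise the threshold (admit more jobs);
    far above it -> lower it (stop admitting new jobs).
    """
    if util < low:                            # plenty of headroom
        threshold += step * (low - util) / low
    elif util > high:                         # memory pressure
        threshold -= step * (util - high) / (1.0 - high)
    return min(0.95, max(0.10, threshold))

def admit_next_job(util: float, threshold: float) -> bool:
    """Start another parallel job only while utilisation is below threshold."""
    return util < threshold

# Toy loop: memory readings drive the threshold rather than a fixed job count.
threshold = 0.80
for util in [0.35, 0.50, 0.70, 0.90, 0.88, 0.60]:
    threshold = adapt_threshold(util, threshold)
    print(f"util={util:.2f}  threshold={threshold:.3f}  "
          f"admit={admit_next_job(util, threshold)}")
```

The essential property is the one the abstract claims: no fixed job count or fixed threshold is ever chosen; the threshold itself tracks the workload.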
Abstract:
Plasma catecholamines provide a reliable biomarker of sympathetic activity. The low circulating concentrations of catecholamines and analytical interferences require tedious sample preparation and long chromatographic runs to ensure their accurate quantification by HPLC with electrochemical detection. Published or commercially available methods relying on solid-phase extraction technology lack sensitivity or require derivatization of catecholamines with hazardous reagents prior to tandem mass spectrometry (MS) analysis. Here, we manufactured a novel 96-well microplate device specifically designed to extract plasma catecholamines prior to their quantification by a new and highly sensitive ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) method. The processing time, which includes sample purification on activated aluminum oxide and elution, is less than 1 h per 96-well microplate. The UPLC-MS/MS analysis run time is 2.0 min per sample. This UPLC-MS/MS method does not require a derivatization step, reduces turnaround time 10-fold compared to conventional methods used for routine application, and allows catecholamine quantification in reduced plasma sample volumes (50-250 μL, e.g., from children and mice).
Abstract:
PURPOSE: Statistical shape and appearance models play an important role in reducing the segmentation processing time of a vertebra and in improving results for 3D model development. Here, we describe the steps in generating a statistical shape model (SSM) of the second cervical vertebra (C2) and provide the shape model for general use by the scientific community. The main difficulties in its construction are the morphological complexity of the C2 and its variability in the population. METHODS: The input dataset is composed of manually segmented, anonymized patient computed tomography (CT) scans. The different datasets are aligned by Procrustes alignment on surface models, and the registration is then cast as a model-fitting problem using a Gaussian process. A principal component analysis (PCA)-based model is generated which captures the variability of the C2. RESULTS: The SSM was generated using 92 CT scans. The resulting SSM was evaluated for specificity, compactness and generalization ability. The SSM of the C2 is freely available to the scientific community in Slicer (an open-source software package for image analysis and scientific visualization), with a module created to visualize the SSM using Statismo, a framework for statistical shape modeling. CONCLUSION: The SSM of the vertebra allows the shape variability of the C2 to be represented. Moreover, the SSM will enable semi-automatic segmentation and 3D model generation of the vertebra, which would greatly benefit surgery planning.
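The PCA step at the heart of such a model is compact enough to sketch. Below is a minimal illustration using synthetic stand-in data; the real pipeline operates on corresponded, Procrustes-aligned C2 surfaces and uses Statismo, neither of which is reproduced here:

```python
# Minimal PCA-based shape model: each training shape is a flat vector of
# concatenated (x, y, z) vertex coordinates, assumed to be in point-to-point
# correspondence and Procrustes-aligned. Random data stands in for the
# 92 segmented C2 surfaces.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_vertices = 92, 500
X = rng.normal(size=(n_shapes, n_vertices * 3))   # stand-in training shapes

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
eigenvalues = s**2 / (n_shapes - 1)               # variance captured per mode

def synthesize(coeffs):
    """New shape = mean + sum_i b_i * sqrt(lambda_i) * mode_i."""
    k = len(coeffs)
    return mean_shape + (coeffs * np.sqrt(eigenvalues[:k])) @ Vt[:k]

sample = synthesize(np.array([1.5, -0.5, 0.2]))   # vary the 3 largest modes
print(sample.reshape(n_vertices, 3).shape)        # (500, 3) vertex array
```

Specificity, compactness and generalization, the three measures cited in the abstract, are then computed from how plausibly and economically this eigenbasis spans held-out shapes.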
Abstract:
"How old is this fingermark?" This question is raised relatively often in trials when suspects admit that they left their fingermarks at a crime scene but allege that the contact occurred at a time different from that of the crime and for legitimate reasons. However, no answer can currently be given to this question, because no fingermark dating methodology has been validated and accepted by the forensic community as a whole. Nevertheless, a review of past American cases highlighted that experts have actually given testimony in court about the age of fingermarks, even though it was mostly based on subjective and poorly documented parameters.
It was relatively easy to access fully documented American cases, which explains the origin of the examples given. However, fingermark dating issues are encountered worldwide, and the lack of consensus among the answers given highlights the need for research on the subject. The present work thus aims at studying the possibility of developing an objective fingermark dating method. As the questions surrounding the development of dating procedures are not new, several attempts have already been described in the literature. This research proposes a critical review of these attempts and highlights that most of the reported methodologies still suffer from limitations preventing their use in actual practice. Nevertheless, some approaches based on the evolution over time of intrinsic compounds detected in fingermark residue appear promising. Thus, an exhaustive review of the literature was conducted in order to identify the compounds available in fingermark residue and the analytical techniques capable of analysing them. It was decided to concentrate on sebaceous compounds analysed using gas chromatography coupled with mass spectrometry (GC/MS) or Fourier transform infrared spectroscopy (FTIR). GC/MS analyses were conducted in order to characterize the initial variability of target lipids among fresh fingermarks of the same donor (intra-variability) and between fingermarks of different donors (inter-variability). As a result, many molecules were identified and quantified for the first time in fingermark residue. Furthermore, it was determined that the intra-variability of the fingermark residue was significantly lower than the inter-variability, but that both kinds of variability could be reduced using different statistical pre-treatments inspired by the field of drug profiling. It was also possible to propose an objective donor classification model allowing donors to be grouped in two main classes based on their initial lipid composition. These classes correspond to what is rather subjectively called "good" or "bad" donors. The potential of such a model is high for the fingermark research field, as it allows the selection of representative donors based on compounds of interest. Using GC/MS and FTIR, an in-depth study was conducted of the effects of different influence factors on the initial composition and aging of target lipid molecules found in fingermark residue. It was determined that univariate and multivariate models could be built to describe the aging of target compounds (transformed into aging parameters through pre-processing techniques), but that some influence factors affected these models more than others. In fact, the donor, the substrate and the application of enhancement techniques seemed to hinder the construction of reproducible models. The other factors tested (deposition moment, pressure, temperature and illumination) also affected the residue and its aging, but models combining different values of these factors still proved to be robust. Furthermore, test fingermarks were analysed with GC/MS in order to be dated using some of the generated models. Correct estimations were obtained for 60% of the dated test fingermarks, and for up to 100% when the storage conditions were known. These results are interesting, but further research should be conducted to evaluate whether these models could be used under uncontrolled casework conditions.
From a more fundamental perspective, a pilot study was also conducted on the use of infrared spectroscopy combined with chemical imaging (FTIR-CI) in order to gain information about fingermark composition and aging. More precisely, its ability to highlight influence factors and aging effects over large fingermark areas was investigated. This information was then compared with that given by individual FTIR spectra. It was concluded that while FTIR-CI is a powerful tool, its use for studying natural fingermark residue for forensic purposes has to be carefully considered. In fact, in this study, the technique did not yield more information on residue distribution than traditional FTIR spectra, and it also suffers from major drawbacks, such as long analysis and processing times, particularly when large fingermark areas need to be covered. Finally, the results obtained in this research allowed a formal and pragmatic framework for approaching fingermark dating questions to be proposed and discussed. It identifies the type of information the scientist is currently able to bring to investigators and/or the courts. Furthermore, the proposed framework also describes the different iterative development steps that research should follow in order to achieve the validation of an objective fingermark dating methodology whose capacities and limits are well known and properly documented.
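The abstract does not specify the aging models themselves; as a hedged illustration only, a univariate model of the kind described can be framed as regressing an aging parameter on known ages and inverting the fit to date a questioned mark:

```python
# Hedged illustration of a univariate aging model (invented numbers, not the
# thesis data): regress a GC/MS-derived aging parameter, e.g. a lipid
# peak-area ratio, on known fingermark ages, then invert the fitted curve
# to estimate the age of a questioned mark.
import numpy as np

ages_days = np.array([0, 3, 7, 14, 21, 28], dtype=float)  # reference series
ratio = np.array([1.00, 0.82, 0.65, 0.45, 0.33, 0.24])    # toy decay values

# Assume first-order decay, ratio = exp(-k * t); fit k on the log scale.
k = -np.polyfit(ages_days, np.log(ratio), 1)[0]

def estimate_age(observed_ratio: float) -> float:
    """Invert ratio = exp(-k * t) to date a questioned fingermark."""
    return -np.log(observed_ratio) / k

print(f"k = {k:.4f} per day; ratio 0.5 -> about {estimate_age(0.5):.1f} days old")
```

Multivariate variants combine several such parameters; as the abstract stresses, donor, substrate and enhancement technique can break the reproducibility of any such fit.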
Abstract:
JXTA is a peer-to-peer (P2P) middleware which has undergone successive iterations through its 10 years of history, slowly incorporating a security baseline that may cater to different applications and services. However, in order to appeal to a broader set of secure scenarios, it would be interesting to take into consideration more advanced capabilities, such as anonymity. There are several proposals for anonymous protocols that can be applied in the context of a P2P network, but it is necessary to be able to choose the right one given each application's needs. In this paper, we provide an experimental evaluation of two relevant protocols, each belonging to a different category of approaches to anonymity: unimessage and split message. We base our analysis on two scenarios, with stable and non-stable peers, and three metrics: round-trip time (RTT), node processing time and reliability.
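As a hedged sketch of how such an evaluation aggregates its results (the record layout and numbers below are invented), each trial can be reduced to the three metrics as follows:

```python
# Hypothetical aggregation of the paper's three metrics from repeated trials:
# round-trip time (RTT) and node processing time over delivered messages,
# and reliability as the fraction of messages that completed at all.
from statistics import mean

# (rtt_ms, node_processing_ms, delivered) per anonymous message exchange
trials = [
    (120.5, 14.2, True),
    (98.7, 11.9, True),
    (None, None, False),   # lost message: counts against reliability only
    (143.1, 16.0, True),
]

delivered = [t for t in trials if t[2]]
rtt = mean(t[0] for t in delivered)
proc = mean(t[1] for t in delivered)
reliability = len(delivered) / len(trials)
print(f"RTT={rtt:.1f} ms  processing={proc:.1f} ms  reliability={reliability:.0%}")
```

Split-message approaches generally trade extra per-message overhead for stronger anonymity, which is why measuring both stable and non-stable peer scenarios matters.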
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of a problem are considered, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating such simulations, but its success depends heavily on the combination of the simulation application, the algorithm and the environment. In this thesis a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified using the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. Null message cancellation reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation forms groups of messages by simulating several messages before releasing the newly created ones; if the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for exploiting the properties of the simulation application is also presented: performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of simulation results is required. Distributed simulation is also analyzed in order to determine the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows a lower bound for the simulation time to be determined. In this thesis critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
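The null message cancellation idea is simple enough to sketch in isolation. Below is a hedged toy version (not the Diworse implementation): a null message arriving on a channel supersedes any unprocessed null messages already queued there, since it carries a strictly better lower bound on future event timestamps:

```python
# Toy sketch of null message cancellation in a conservative (Chandy-Misra)
# simulation: a channel keeps at most one pending null message, because a
# newer null message carries a better lower bound on future timestamps.
from collections import deque

class Channel:
    def __init__(self):
        # (timestamp, payload) pairs; payload None marks a null message
        self.queue = deque()

    def receive(self, timestamp, payload=None):
        if payload is None:
            # Cancel older pending null messages: they only promised a
            # weaker lower bound, which this null message supersedes.
            self.queue = deque(m for m in self.queue if m[1] is not None)
        self.queue.append((timestamp, payload))

ch = Channel()
ch.receive(5)            # null message: "no event before t=5"
ch.receive(8)            # cancels the t=5 null message
ch.receive(9, "call")    # real event messages are never cancelled
print(list(ch.queue))    # [(8, None), (9, 'call')]
```

Multiple message simulation attacks the same cost from the other side: by simulating a batch of messages before releasing the ones they generate, the per-message synchronization overhead is amortized across the batch.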
Abstract:
The amazing world of micro total analysis systems (μTAS) has brought about a true revolution in analytical chemistry in recent years. The application of microfluidic devices to chemical and biochemical processing has attracted considerable interest due to advantages such as reduced sample and reagent consumption, shorter processing times, lower energy use, less waste, lower cost, and portability. The aim of the present report is to disseminate the state of the art of miniaturization science in analytical chemistry. Historical progress, microfabrication technologies, required instrumentation and applications of μTAS are presented in the current article, with special attention to Brazilian contributions.
Abstract:
Fluent health information flow is critical for clinical decision-making. However, a considerable part of this information is free-form text, and the inability to utilize it creates risks to patient safety and cost-effective hospital administration. Methods for the automated processing of clinical text are emerging. The aim of this doctoral dissertation is to study machine learning and clinical text in order to support health information flow. First, by analyzing the content of authentic patient records, the aim is to specify clinical needs in order to guide the development of machine learning applications. The contributions are a model of the ideal information flow, a model of the problems and challenges in reality, and a road map for the technology development. Second, by developing applications for practical cases, the aim is to concretize ways to support health information flow. Altogether five machine learning applications for three practical cases are described: The first two applications are binary classification and regression for the practical case of topic labeling and relevance ranking. The third and fourth applications are supervised and unsupervised multi-class classification for the practical case of topic segmentation and labeling. These four applications are tested with Finnish intensive care patient records. The fifth application is multi-label classification for the practical task of diagnosis coding; it is tested with English radiology reports. The performance of all these applications is promising. Third, the aim is to study how the quality of machine learning applications can be reliably evaluated. The associations between performance evaluation measures and methods are addressed, and a new hold-out method is introduced. This method improves not only processing time but also the diversity and quality of evaluation. The main conclusion is that developing machine learning applications for text requires interdisciplinary, international collaboration. Practical cases are very different, and hence development must begin from genuine user needs and domain expertise. The technological expertise must cover linguistics, machine learning, and information systems. Finally, the methods must be evaluated both statistically and through authentic user feedback.
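The dissertation's specific hold-out variant is not described in the abstract; as a baseline for what such an evaluation computes, the sketch below runs a standard repeated (Monte Carlo) hold-out, reporting the spread of scores rather than a single split:

```python
# Standard repeated hold-out as a hedged baseline (toy data and a
# nearest-centroid stand-in for the real classifiers): repeat random
# train/test splits and report the mean and spread of the score.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy binary labels

def holdout_accuracy(X, y, test_frac=0.3):
    idx = rng.permutation(len(y))
    cut = int(len(y) * test_frac)
    test, train = idx[:cut], idx[cut:]
    centroids = [X[train][y[train] == c].mean(axis=0) for c in (0, 1)]
    dists = [np.linalg.norm(X[test] - c, axis=1) for c in centroids]
    pred = (dists[1] < dists[0]).astype(int)
    return (pred == y[test]).mean()

scores = [holdout_accuracy(X, y) for _ in range(30)]
print(f"accuracy {np.mean(scores):.3f} +/- {np.std(scores):.3f} over 30 splits")
```

Reporting the spread across splits is one concrete sense in which a hold-out scheme can add evaluation diversity beyond a single train/test partition.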
Abstract:
The objective of this work was to use a high-pressure homogenizer (HPH) to prepare stable oil/water nanoemulsions with a narrow particle size distribution. The dispersions were prepared using nonionic surfactants based on ethoxylated ether. The size and distribution of the droplets formed, along with their stability, were determined in a Zetasizer Nano ZS particle size analyzer. The stability and droplet size distribution of these systems did not differ significantly as the processing pressure in the HPH was increased. Longer processing times, however, can broaden the particle size distribution, thus reducing stability.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is as yet unclear which applications would benefit most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to let memristors perform computation in a natural way, rather than attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications; in particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intra-chip communication can be naturally implemented by a memristive crossbar structure.
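As a hedged sketch of the single-device behaviour analysed in the first part, the linear dopant-drift model published with the HP Labs device is easy to simulate; the parameter values below are illustrative, not measurements:

```python
# Linear dopant-drift memristor model (after the 2008 HP Labs description):
# resistance interpolates between R_on and R_off with the normalized
# doped-region width x = w/D, and w drifts in proportion to the current.
# All parameter values here are illustrative.
import numpy as np

D = 10e-9                    # device thickness (m)
R_on, R_off = 100.0, 16e3    # fully doped / undoped resistance (ohm)
mu_v = 1e-14                 # dopant mobility (m^2 V^-1 s^-1)

def simulate(v_of_t, dt=1e-4, steps=20000, w=0.1 * D):
    """Integrate dw/dt = mu_v * (R_on / D) * i(t) under a voltage drive."""
    current = np.empty(steps)
    for n in range(steps):
        x = w / D
        M = R_on * x + R_off * (1.0 - x)              # memristance
        i = v_of_t(n * dt) / M
        w = min(max(w + mu_v * R_on / D * i * dt, 0.0), D)
        current[n] = i
    return current

# A slow sinusoidal drive traces the memristor's signature pinched
# hysteresis loop in the current-voltage plane.
i = simulate(lambda t: np.sin(2 * np.pi * 1.0 * t))
print(f"current range: {i.min():.2e} .. {i.max():.2e} A")
```

Stateful logic and crossbar memories build directly on this behaviour: the device's resistance is both its memory and the operand of the computation.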
Abstract:
Anthocyanins are present in high concentrations in Chinese bayberry, Myrica rubra Sieb. & Zucc. Herein, microwave-assisted extraction was used to extract the anthocyanins from Chinese bayberry. The HPLC chromatogram of the extracts showed that the anthocyanin components were slightly hydrolysed during the extraction process. Further experiments confirmed that microwave irradiation slightly hydrolysed cyanidin-3-O-glucoside to cyanidin, but did not significantly influence the antioxidant activity of the extracts. The optimized extraction conditions for total anthocyanin content were a solid-to-liquid ratio, extraction temperature, and extraction time of 1:50, 80 °C, and 15 min, respectively. Under these conditions, the anthocyanin content was 2.95 ± 0.08 mg·g⁻¹, and the antioxidant activity was 279.96 ± 0.1 μmol·g⁻¹ Trolox equivalent on a dry weight basis. These results indicate that microwave-assisted extraction is a highly efficient extraction method with reduced processing time, although under some extraction conditions it can damage the anthocyanins. These results provide an important guide for the application of microwave extraction.
Abstract:
This thesis research work focused on the carbonate precipitation of magnesium using magnesium hydroxide (Mg(OH)2) and carbon dioxide (CO2) gas at ambient temperature and pressure. The rate of dissolution of Mg(OH)2 and the precipitation kinetics were investigated under different operating conditions. The conductivity and pH of the solution were monitored in-line with a Consort meter, and the solid samples obtained from the precipitation reaction were analysed with a Malvern Mastersizer laser diffraction analyzer to obtain the particle size distributions (PSD) of the crystal samples. The Mg2+ concentration profiles were determined from the liquid phase of the precipitate by ion chromatography (IC). The crystal morphology of the obtained precipitates was also investigated and discussed. For the carbonation reaction of magnesium hydroxide, it was found that magnesium carbonate trihydrate (nesquehonite) was the main product, and that its formation occurred at a pH of around 7-8. The stirrer speed had a significant effect on the dissolution rate of Mg(OH)2. The highest Mg2+ concentration obtained was 0.424 mol L-1 at 470 rpm and 0.387 mol L-1 at 560 rpm, corresponding to processing times of 45 min and 40 min, respectively. The particle size distribution showed that the average particle size kept increasing during the reaction as CO2 was fed to the system. The carbonation process is kinetically favored and simple, as nesquehonite formation occurs in a very short time; nesquehonite is a thermodynamically and chemically stable solid product, which allows for long-term storage of CO2. Since the carbonation reaction is a complex system that includes the dissolution of magnesium hydroxide particles, absorption of CO2, chemical reaction and crystallization, the dissolution of magnesium hydroxide was also studied in hydrochloric acid (HCl) solvent with and without nitrogen (N2) inert gas. It was found that the impeller speed had an effect on the dissolution rate: the higher the impeller speed, the higher the pH of the solution, although this was not the case for the highest speed of 650 rpm. It was therefore concluded that the optimum stirrer speed was 560 rpm. The influence of the inert gas N2 on the dissolution rate of Mg(OH)2 particles could be seen in the measured pH, electric conductivity and Mg2+ concentration curves.
Abstract:
In the present work, indigenous polymer-coated tin-free steel (TFS) cans were analyzed for their suitability for the thermal processing and storage of fish and fish products following standard methods. The raw materials used for the development of ready-to-eat thermally processed fish products were found to be in fresh condition, and the values of the various biochemical and microbiological parameters of the raw materials were well within the limits. Based on the analysis of commercial sterility, instrumental colour, texture, WB shear force and sensory parameters, squid masala processed to an F0 value of 8 min, with a total process time of 38.5 min and a cook value of 92 min, was chosen as the optimum for squid masala in tin-free steel cans, while shrimp curry processed to an F0 of 7 min, with a total process time of 44.0 min and a cook value of 91.1 min, was found to be ideal and was selected for the storage study. Squid masala and shrimp curry thermally processed in indigenous polymer-coated TFS cans were found to be acceptable even after one year of storage at room temperature, based on the analysis of various sensory and biochemical parameters. Analysis of the Commission Internationale de l'Eclairage L*, a* and b* color values showed that the duration of exposure to heat treatment influenced the color parameters: the lightness (L*) and yellowness (b*) decreased, and the redness (a*) significantly increased, with an increase in processing time or a reduction in processing temperature. Instrumental analysis of texture showed that hardness 1 and 2 decreased with a reduction in retort temperature, while the cohesiveness value did not show any appreciable change with a decrease in processing temperature. Other texture profile parameters such as gumminess, springiness and chewiness decreased significantly with increasing processing time. The WB shear force values of mackerel meat processed at 130 °C were significantly higher than those processed at 121.1 and 115 °C. HTST processing of mackerel in brine helped reduce the process time and improve the quality. The study also indicated that indigenous polymer-coated TFS cans with easy-open ends can be a viable alternative to conventional tin and aluminium cans. The industry can utilize these cans for processing ready-to-eat fish and shellfish products for both domestic and export markets, which will help revive the canning industry in India.
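The F0 values quoted above follow from the standard process-lethality integral, F0 = ∫ 10^((T(t) - 121.1)/z) dt with z = 10 °C and time in minutes. A short sketch, using an invented cold-spot profile rather than the study's retort data:

```python
# Process lethality F0 = sum over samples of 10 ** ((T - 121.1) / z) * dt,
# with z = 10 degC and dt in minutes. The cold-spot temperature profile
# below is invented for illustration; it is not the study's retort data.

def f0(profile_degC, dt_min=0.5, t_ref=121.1, z=10.0):
    """Rectangle-rule lethality over a sampled cold-spot profile."""
    return sum(10 ** ((T - t_ref) / z) * dt_min for T in profile_degC)

# Come-up, hold near 121 degC, then cooling, sampled every 0.5 min.
profile = [60, 80, 100, 110, 115, 118, 120, 121, 121, 121,
           121, 121, 120, 115, 100, 80]
print(f"F0 = {f0(profile):.2f} min")
```

The trade-off the study exploits is visible in the exponent: each z = 10 °C rise in temperature multiplies the lethality rate tenfold, which is why HTST processing at 130 °C shortens the total process time.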
Abstract:
In this work, sub-micrometre thick CuInSe2 films were prepared using different techniques, viz. selenization through chemically deposited selenium and sequential elemental evaporation. These methods are simpler than the co-evaporation technique, which is known to be the most suitable one for CuInSe2 preparation. The films were optimized by varying the composition over a wide range to find optimum properties for device fabrication. The typical absorber layer thickness of today's solar cells ranges from 2-3 μm. Thinning the absorber layer is one of the challenges in reducing processing time and material usage, particularly of indium. Here we attempted to fabricate solar cells with an absorber layer of sub-micrometre thickness.
Abstract:
In this thesis we have developed several inventory models in which items are served to customers after a processing time. This leads to a queue of demand even when items are available. In Chapter 2 we discuss a problem involving the search for orbital customers in order to provide them with inventory; retrial of orbital customers is also considered in that chapter. In Chapter 5 we also discuss a retrial inventory model, this one without orbital search for customers. In the remaining chapters (3, 4 and 6) we do not consider retrial of customers; rather, we assume the waiting room capacity of the system to be arbitrarily large. Though the models in Chapters 3 and 4 differ only in that the former considers a positive lead time for the replenishment of inventory while the latter assumes it to be negligible, we arrive at sharper results in Chapter 4. In Chapter 6 we consider a production inventory model in which the production time of a single item and the service time of a customer follow distinct Erlang distributions. We also introduce protection of production and service stages and investigate the optimal number of stages to be protected.
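A toy simulation makes the central point of these models concrete (all parameters are invented): even with items in stock, demands queue because each item needs a processing/service time before delivery:

```python
# M/M/1-style toy of an inventory system with positive processing time
# (invented rates): customers arrive, and even though an item is available,
# each must be processed before delivery, so a queue of demand builds up.
import random

random.seed(1)
next_arrival, server_free_at = 0.0, 0.0
waits = []
for _ in range(10_000):
    t = next_arrival
    next_arrival = t + random.expovariate(0.8)        # demand rate 0.8
    start = max(t, server_free_at)                    # queue if busy
    server_free_at = start + random.expovariate(1.0)  # processing rate 1.0
    waits.append(start - t)

print(f"mean wait despite available stock: {sum(waits) / len(waits):.2f}")
```

The chapters then layer inventory-specific structure on top of this basic queue: lead times, retrials from an orbit, and Erlang-distributed production and service stages.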