883 results for Simulation Based Method


Relevance:

80.00%

Publisher:

Abstract:

In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis for assessing them. As an application we compare the results of the dynamic and static models for a set of UK universities. The results suggest that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
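
For contrast with the dynamic model, the static baseline can be made concrete: a textbook input-oriented CCR DEA efficiency score is the optimum of a small linear program. The sketch below (toy data, scipy's linprog) implements that static model only, not the paper's path-based dynamic extension.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n = X.shape[0]
    # decision variables: [theta, lambda_1..lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lam_j * x_j <= theta * x_k   ->   -theta*x_k + X^T lam <= 0
    A_in = np.c_[-X[k][:, None], X.T]
    b_in = np.zeros(X.shape[1])
    # sum_j lam_j * y_j >= y_k           ->   -Y^T lam <= -y_k
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    b_out = -Y[k]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

X = np.array([[2.0], [4.0], [8.0]])   # toy inputs (one per unit)
Y = np.array([[1.0], [3.0], [4.0]])   # toy outputs
print([round(ccr_efficiency(X, Y, k), 3) for k in range(3)])  # unit 1 is efficient
```

In the dynamic setting the paper addresses, a single period's (X, Y) snapshot like this is exactly what breaks down, since capital spent now produces output later.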

Relevance:

80.00%

Publisher:

Abstract:

A recently proposed colour-based tracking algorithm has been shown to track objects in real circumstances [Zivkovic, Z., Krose, B., 2004. An EM-like algorithm for color-histogram-based object tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 798-803]. To improve the performance of this technique in complex scenes, in this paper we propose a new algorithm for optimally adapting the ellipse outlining the objects of interest. The paper presents a Lagrangian-based method for integrating a regularising component into the covariance matrix to be computed. Technically, we aim to reduce the residuals between the estimated probability distribution and the expected one. We argue that, by doing this, the shape of the ellipse can be properly adapted in the tracking stage. Experimental results show that the proposed method performs favourably in shape adaptation and object localisation.
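
As a rough illustration of regularising the covariance that defines the tracking ellipse: the shrinkage form, the lam weight and the isotropic prior below are assumptions of this sketch, not the paper's Lagrangian formulation.

```python
import numpy as np

def regularised_ellipse(points, weights, lam=0.1, prior_scale=4.0):
    """Weighted covariance of pixel positions, shrunk towards an isotropic
    prior -- a simple stand-in for a regularised covariance estimate."""
    w = weights / weights.sum()
    mu = w @ points                       # weighted centroid of the target
    d = points - mu
    cov = (w[:, None] * d).T @ d          # weighted 2x2 covariance
    cov_reg = (1 - lam) * cov + lam * prior_scale * np.eye(2)
    return mu, cov_reg                    # ellipse = e.g. 2-sigma contour of cov_reg

pts = np.random.default_rng(1).normal([50, 40], [6, 3], size=(200, 2))
wts = np.ones(len(pts))                   # e.g. colour-likelihood weights
mu, cov = regularised_ellipse(pts, wts)
print(mu, "\n", cov)
```

Increasing lam biases the ellipse towards the prior shape, one simple way to keep the outline from collapsing or exploding in cluttered scenes.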

Relevance:

80.00%

Publisher:

Abstract:

Hazard and operability (HAZOP) studies on chemical process plants are very time-consuming, and often tedious, tasks. A HAZOP study requires a team of experts to systematically analyse every conceivable process deviation, identifying possible causes and any hazards that may result. The systematic nature of the task, and the fact that some team members may be unoccupied for much of the time, can lead to tedium, which in turn may lead to serious errors or omissions. Fault trees are one aid to HAZOP: they present the system failure logic graphically, so that the study team can readily assimilate its findings. Fault trees are also useful for identifying design weaknesses, and may additionally be used to estimate the likelihood of hazardous events occurring. Their one drawback is that they are difficult to generate by hand, because of the sheer size and complexity of modern process plants. The work in this thesis proposes a computer-based method to aid the development of fault trees for chemical process plants. The aim is to produce concise, structured fault trees that are easy for analysts to understand. Standard plant input-output equation models for major process units are modified to include ancillary units and pipework, reducing the number of nodes required to represent a plant. Control loops and protective systems are modelled as operators which act on process variables. This modelling maintains the functionality of loops, making fault tree generation easier and improving the structure of the fault trees produced. A method called event ordering is proposed which allows the magnitude of deviations of controlled or measured variables to be defined in terms of the control loops and protective systems with which they are associated.
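
To make the fault-tree idea concrete, here is a minimal gate/event representation in Python; the structure and the event names are invented for illustration, and the thesis's automated generation method is of course far richer than this.

```python
from dataclasses import dataclass, field

@dataclass
class Gate:
    """Minimal fault-tree node: an AND/OR gate over basic events or sub-gates."""
    kind: str                     # "AND" or "OR"
    children: list = field(default_factory=list)

    def occurs(self, state: dict) -> bool:
        vals = [c.occurs(state) if isinstance(c, Gate) else state[c]
                for c in self.children]
        return all(vals) if self.kind == "AND" else any(vals)

# Hypothetical top event: overpressure if the relief valve fails AND
# (the controller fails OR the pressure sensor drifts).
top = Gate("AND", ["relief_valve_fails",
                   Gate("OR", ["controller_fails", "sensor_drifts"])])
print(top.occurs({"relief_valve_fails": True,
                  "controller_fails": False,
                  "sensor_drifts": True}))   # -> True
```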

Relevance:

80.00%

Publisher:

Abstract:

In this thesis, I describe studies on the fabrication, spectral characteristics and applications of tilted fibre gratings (TFGs) with small, large and 45° tilted structures, and novel developments in the fabrication of fibre Bragg gratings (FBGs) and long-period gratings (LPGs) in normal silica and mid-infrared (mid-IR) glass fibres using a near-IR femtosecond laser. One of the major contributions presented in this thesis is the systematic investigation of the structures, inscription methods and spectral, polarisation-dependent loss (PDL) and thermal characteristics of TFGs with small (<45°), large (>45°) and 45° tilted structures. I have experimentally characterised TFGs, obtaining relationships between the radiation angle, the central wavelength of the radiation profile, the Bragg resonance and the tilt angle, which are consistent with theoretical simulation based on mode-coupling theory. Furthermore, thermal responses have been measured for these three types of TFGs, showing that the transmission spectra of large-angle and 45° TFGs are insensitive to temperature change, unlike normal and small-angle tilted FBGs. Building on these distinctive optical properties, TFGs have been developed into an interrogation system and sensors, which form the other significant contributions of the work presented in this thesis. The 10°-TFG based 800 nm WDM interrogation system functions not just as an in-fibre spectrum analyser but also offers refractive-index sensing capability. By exploiting their unique polarisation properties, the 81°-TFG based sensors are capable of sensing transverse loading and twisting with sensitivities of 2.04 pW/(kg/m) and 145.90 pW/rad, respectively. The final and most important contribution of the research work presented in this thesis is the development of novel grating inscription techniques using a near-IR femtosecond laser. A number of LPGs and FBGs were successfully fabricated in normal silica and mid-IR glass fibres using point-by-point and phase-mask techniques. LPGs and 1st- and 2nd-order FBGs fabricated in the mid-IR glass fibres show resonances covering the wavelength range from 1200 to 1700 nm with strengths up to 13 dB. In addition, the thermal and strain sensitivities of these gratings have been systematically investigated. All the results from these initial but systematic studies will provide useful characterisation information for future fibre grating based devices and applications in the mid-IR range.
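
For context, the textbook phase-matching relation linking the Bragg resonance to the tilt angle (notation assumed here, not taken from the thesis) is:

```latex
% Standard phase-matching for a tilted fibre grating (notation assumed):
% \Lambda_G = grating period normal to the fringe planes, \theta = tilt angle,
% n_{\mathrm{eff}} = effective index of the guided mode.
\lambda_{\mathrm{Bragg}} = \frac{2\, n_{\mathrm{eff}}\, \Lambda_G}{\cos\theta}
```

At θ = 0 this reduces to the familiar FBG condition λ = 2 n_eff Λ, and the cos θ factor is what shifts the resonance as the tilt grows, consistent with the tilt-angle dependence characterised in the thesis.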

Relevance:

80.00%

Publisher:

Abstract:

Many planning and control tools, especially network analysis, have been developed in the last four decades. The majority of them were created in military organizations to solve the problem of planning and controlling research and development projects. The original version of the network model (i.e. CPM/PERT) was transplanted to the construction industry without consideration of the special nature and environment of construction projects. It suited the purpose of setting targets and defining objectives, but it failed to satisfy the requirement of detailed planning and control at the site level. Several analytical and heuristic rule-based methods were designed and combined with the structure of CPM to eliminate its deficiencies, but none of them provides a complete solution to the problem of resource, time and cost control. VERT was designed to deal with new ventures and is suitable for project evaluation at the development stage. CYCLONE, on the other hand, is concerned with the design and micro-analysis of the production process. This work presents an extensive critical review of the available planning techniques and addresses the problem of planning for site operation and control. Based on an outline of the nature of site control, this research developed a simulation-based network model which combines parts of the logic of both VERT and CYCLONE. Several new nodes were designed to model the availability and flow of resources and the overhead and operating costs, along with special nodes for evaluating time and cost. A large software package was written to handle the input, the simulation process and the output of the model. The package is designed to run on any microcomputer using the MS-DOS operating system. Data from real-life projects were used to demonstrate the capability of the technique. Finally, conclusions are drawn regarding the features and limitations of the proposed model, and recommendations for future work are outlined at the end of this thesis.
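
The flavour of simulation-based (rather than single-estimate CPM) planning can be conveyed in a few lines of Monte Carlo; the activity network and triangular duration estimates below are invented for illustration, and are far simpler than the VERT/CYCLONE-style resource and cost nodes the thesis develops.

```python
import random

# Toy activity network (durations in days); dict order is topological.
ACTIVITIES = {          # activity: (predecessors, (min, mode, max))
    "excavate":   ([],                     (3, 5, 9)),
    "foundation": (["excavate"],           (4, 6, 10)),
    "frame":      (["foundation"],         (8, 10, 15)),
    "services":   (["foundation"],         (5, 7, 12)),
    "finish":     (["frame", "services"],  (6, 8, 11)),
}

def project_duration():
    finish = {}
    for act, (preds, (a, m, b)) in ACTIVITIES.items():
        start = max((finish[p] for p in preds), default=0.0)
        finish[act] = start + random.triangular(a, b, m)  # sampled duration
    return max(finish.values())

runs = sorted(project_duration() for _ in range(10000))
print(f"median {runs[5000]:.1f} d, 90th percentile {runs[9000]:.1f} d")
```

Unlike a deterministic CPM pass, the simulation yields a whole distribution of completion times, which is what makes site-level risk statements possible.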

Relevance:

80.00%

Publisher:

Abstract:

Oral drug delivery is considered the most popular route of delivery because of the ease of administration, the availability of a wide range of dosage forms and the large surface area for drug absorption via the intestinal membrane. However, besides the unfavourable biopharmaceutical properties of many therapeutic agents, efflux transporters such as P-glycoprotein (P-gp) and multidrug resistance proteins (MRP) decrease overall drug uptake by extruding the drug from cells. Although prodrugs have been investigated as a way to improve drug partitioning by covalently masking the polar groups with pro-moieties that promote uptake, they present significant challenges, including reduced solubility and increased toxicity. The current work investigates the use of amino acids as ion-pairs for three model drugs: indomethacin (weak acid), trimethoprim (weak base) and ciprofloxacin (zwitterion), in an attempt to improve both solubility and uptake. Solubility was addressed by salt formation, while the rationale for enhancing absorption was to create new routes for uptake across the membranes via amino acid transporter proteins or dipeptidyl transporters. New salts of the model drugs with oppositely charged amino acids were prepared by freeze drying and characterised using FTIR, 1H NMR, DSC, SEM, pH-solubility profiles, solubility and dissolution. Permeability profiles were assessed using an in vitro cell-based method (Caco-2 cells), and the genetic changes occurring across the transporter genes and the various pathways involved in cellular activities were studied using DNA microarrays. Solubility data showed a significant increase in drug solubility upon preparing the new salts with oppositely charged counter-ions (the ciprofloxacin glutamate salt exhibiting a 2.9 x 10^3-fold enhancement compared to the free drug). Moreover, permeability studies showed a 3-fold increase in trimethoprim and indomethacin permeabilities upon ion-pairing with amino acids, and a more than 10-fold increase when the zwitterionic drug was paired with glutamic acid. Microarray data revealed that trimethoprim was absorbed actively via OCTN1 transporters, while MRP7 is the main transporter gene mediating its efflux. The absorption of trimethoprim from trimethoprim-glutamic acid ion-paired formulations was affected by the ratio of glutamic acid in the formulation, which was inversely proportional to the degree of expression of OCTN1. Interestingly, ciprofloxacin-glutamic acid ion-pairs were found to decrease the up-regulation of ciprofloxacin efflux proteins (P-gp and MRP4) and to over-express two solute carrier transporters (PEPT2 and SLCO1A2), suggesting that a high aqueous binding constant (K11aq) enables the ion-paired formulations to be absorbed as one entity. In conclusion, the formation of ion-pairs with amino acids can positively influence the solubility, transport and gene-expression effects of drugs.

Relevance:

80.00%

Publisher:

Abstract:

Purpose: (1) To devise a model-based method for estimating the probabilities of binocular fusion, interocular suppression and diplopia from psychophysical judgements; (2) to map out the way fusion, suppression and diplopia vary with the binocular disparity and blur of single edges shown to each eye; (3) to compare the binocular interactions found for edges of the same vs opposite contrast polarity. Methods: Test images were single, horizontal, Gaussian-blurred edges, with blur B = 1-32 min arc and vertical disparity 0-8 x B, shown for 200 ms. In the main experiment, observers reported whether they saw one central edge, one offset edge, or two edges. We argue that the relation between these three response categories and the three perceptual states (fusion, suppression, diplopia) is indirect and likely to be distorted by positional noise and criterion effects, and so we developed a descriptive, probabilistic model to estimate both the perceptual states and the noise/criterion parameters from the data. Results: (1) Using simulated data, we validated the model-based method by showing that it recovered fairly accurately the disparity ranges for fusion and suppression. (2) The disparity range for fusion (Panum's limit) increased greatly with blur, in line with previous studies. The disparity range for suppression was similar to the fusion limit at large blurs, but two or three times the fusion limit at small blurs; this meant that diplopia was much more prevalent at larger blurs. (3) Diplopia was much more frequent when the two edges had opposite contrast polarity. A formal comparison of models indicated that fusion occurs for same, but not opposite, polarities. The probability of suppression was greater for unequal contrasts, and it was always the lower-contrast edge that was suppressed. Conclusions: Our model-based data analysis offers a useful tool for probing binocular fusion and suppression psychophysically. The disparity range for fusion increased with edge blur but fell short of complete scale-invariance; the disparity range for suppression also increased with blur but was not close to scale-invariance. Single vision occurs through fusion, but also beyond the fusion range, through suppression. Thus suppression can serve as a mechanism for extending single vision to larger disparities, but mainly for sharper edges, where the fusion range is small (5-10 min arc). For large blurs the fusion range is so much larger that no such extension may be needed. © 2014 The College of Optometrists.
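
The forward half of a model in this spirit, mapping perceptual-state probabilities plus positional noise and a report criterion onto the three response categories, might look like the following sketch; the specific noise model and parameter values are assumptions of this illustration, not the authors' fitted model.

```python
from scipy.stats import norm

def category_probs(p_fuse, p_supp, disparity, sigma=2.0, criterion=3.0):
    """Given P(fusion) and P(suppression) (diplopia is the remainder),
    positional noise sigma and a report criterion (both in min arc),
    return P(report 'central'), P('offset'), P('two edges')."""
    p_dip = 1.0 - p_fuse - p_supp

    def p_central_given(mu):        # P(|perceived offset| < criterion)
        return norm.cdf(criterion, mu, sigma) - norm.cdf(-criterion, mu, sigma)

    c_fuse = p_central_given(0.0)               # fused edge sits midway
    c_supp = p_central_given(disparity / 2.0)   # surviving edge is displaced
    p_central = p_fuse * c_fuse + p_supp * c_supp
    p_offset = p_fuse * (1.0 - c_fuse) + p_supp * (1.0 - c_supp)
    return p_central, p_offset, p_dip

print(category_probs(p_fuse=0.6, p_supp=0.3, disparity=8.0))
```

Inverting such a forward model by maximum likelihood is what lets the perceptual-state probabilities be separated from the noise/criterion parameters, which is the crux of the paper's method.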

Relevance:

80.00%

Publisher:

Abstract:

A statistics-based method using genetic algorithms for predicting discrete sequences is presented. The prediction of the next value is based upon a fixed number of previous values and the statistics offered by the training data. According to those statistics, similar past cases were followed by different values; if these candidate values are combined with appropriate weights, the forecast is successful. The weights are generated by genetic algorithms.
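
A toy version of the statistics-plus-GA scheme is sketched below; the context length, weight encoding and GA settings are all assumptions of this sketch, not the paper's design.

```python
import random
from collections import defaultdict

K = 3   # number of previous values used as context (an assumption)

def build_table(seq):
    """For every length-K context, record the values that followed it."""
    table = defaultdict(list)
    for i in range(len(seq) - K):
        table[tuple(seq[i:i + K])].append(seq[i + K])
    return table

def predict(table, ctx, weights):
    """Weighted vote over the values seen after this context."""
    votes = defaultdict(float)
    for rank, v in enumerate(table.get(ctx, ())):
        votes[v] += weights[rank % len(weights)]
    return max(votes, key=votes.get) if votes else None

def fitness(weights, seq, table):
    return sum(predict(table, tuple(seq[i:i + K]), weights) == seq[i + K]
               for i in range(len(seq) - K))

random.seed(0)
train = [random.choice([0, 1, 2]) for _ in range(300)]
table = build_table(train)

pop = [[random.random() for _ in range(4)] for _ in range(20)]
for _ in range(30):                         # tiny GA: elitism + blend crossover
    pop.sort(key=lambda w: -fitness(w, train, table))
    parents = pop[:10]
    pop = parents + [[(a + b) / 2 + random.gauss(0, 0.05)
                      for a, b in zip(random.choice(parents),
                                      random.choice(parents))]
                     for _ in range(10)]
pop.sort(key=lambda w: -fitness(w, train, table))
print("best training accuracy:",
      round(fitness(pop[0], train, table) / (len(train) - K), 3))
```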

Relevance:

80.00%

Publisher:

Abstract:

Recent evidence has suggested cerebellar anomalies in developmental dyslexia. We therefore investigated cerebellar morphology in subjects with documented reading disabilities. We obtained T1-weighted magnetic resonance images in the coronal and sagittal planes from 11 males with prior histories of developmental dyslexia and nine similarly aged male controls. Proton magnetic resonance spectra (TE = 136 ms, TR = 2.4 s) were obtained bilaterally in the cerebellum. Phonological decoding skill was measured using non-word reading. Handedness was assessed using both the Annett questionnaire of hand preference and Annett's peg-moving task. Cerebellar symmetry was observed in the dyslexics, whereas there was significant asymmetry (right grey matter > left grey matter) in the controls. The interpretation of these results depended on whether a motor-based or questionnaire-based method was used to determine handedness. The degree of cerebellar symmetry was correlated with the severity of the dyslexics' phonological decoding deficit: those with more symmetric cerebella made more errors on a nonsense-word reading measure of phonological decoding ability. Left cerebellar metabolite ratios correlated significantly with the degree of cerebellar asymmetry (P < 0.05) in controls; this relationship was absent in developmental dyslexics. Cerebellar morphology thus reflects the higher degree of symmetry found previously in the temporal and parietal cortex of dyslexics. The relationship of cerebellar asymmetry to phonological decoding ability and handedness, together with our previous finding of altered metabolite ratios in the cerebellum of dyslexics, leads us to suggest that there are alterations in the neurological organisation of the cerebellum which relate to phonological decoding skills, in addition to motor skills and handedness.

Relevance:

80.00%

Publisher:

Abstract:

Objective: In this study we used a chemometrics-based method to correlate key liposomal adjuvant attributes with in-vivo immune responses based on multivariate analysis. Methods: The liposomal adjuvant composed of the cationic lipid dimethyldioctadecylammonium bromide (DDA) and trehalose 6,6-dibehenate (TDB) was modified with 1,2-distearoyl-sn-glycero-3-phosphocholine at a range of mol% ratios, and the main liposomal characteristics (liposome size and zeta potential) were measured along with their immunological performance as an adjuvant for the novel, post-exposure fusion tuberculosis vaccine Ag85B-ESAT-6-Rv2660c (H56 vaccine). Partial least squares regression analysis was applied to correlate and cluster liposomal adjuvant particle characteristics with in-vivo derived immunological performance (IgG, IgG1, IgG2b, spleen proliferation, IL-2, IL-5, IL-6, IL-10, IFN-γ). Key findings: While a range of factors varied across the formulations, decreasing the 1,2-distearoyl-sn-glycero-3-phosphocholine content (and the consequent change in zeta potential) formed the strongest variables in the model. Enhanced DDA and TDB content (and the consequent zeta potential) stimulated a response skewed towards cell-mediated immunity, with the model identifying correlations with IFN-γ, IL-2 and IL-6. Conclusion: This study demonstrates the application of chemometrics-based correlations and clustering, which can inform liposomal adjuvant design.
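
For readers unfamiliar with the technique, partial least squares regression of formulation attributes against immune readouts takes only a few lines with scikit-learn; the arrays below are random placeholders, not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(12, 3))   # e.g. DSPC mol%, liposome size (nm), zeta (mV)
Y = rng.normal(size=(12, 4))   # e.g. IgG, IgG1, IL-2, IFN-gamma readouts

pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print("X loadings:\n", pls.x_loadings_)   # which attributes drive each component
print("R^2 on training data:", pls.score(X, Y))
```

Inspecting the loadings is what allows statements like "DSPC content and zeta potential formed the strongest variables in the model".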

Relevance:

80.00%

Publisher:

Abstract:

Signal processing is an important topic in technological research today. In nonlinear dynamics research, the endeavour to control or order chaos has received increasing attention over the last few years. Growing interest in neural networks composed of simple processing elements (neurons) has led to widespread use of such networks for learning to control dynamic systems. This paper presents a backpropagation-based neural network architecture that can be used as a controller to stabilise unstable periodic orbits. It also presents a neural network-based method for transferring the dynamics among attractors, leading to more efficient system control. The procedure can be applied to every point of the basin, no matter how far from the attractor it lies. Finally, the paper shows how two mixed chaotic signals can be controlled using a backpropagation neural network as a filter to separate and control both signals at the same time. The neural network provides more effective control than feedback control methods, overcoming their problems: control can be applied to the system at any point, even if it is moving away from the target state, which avoids waiting times; control can be applied even when little information about the system is available; and control remains stable longer, even in the presence of random dynamic noise.
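
As a minimal illustration of the idea (not the paper's architecture), a small backpropagation-trained network can learn a state-feedback correction that stabilises the unstable fixed point of the logistic map; the map, teacher law, window and network size below are all assumptions of this sketch.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

r = 3.9
x_star = 1.0 - 1.0 / r              # unstable fixed point of x -> r*x*(1 - x)
slope = r * (1.0 - 2.0 * x_star)    # local derivative at x_star (= 2 - r)

# Teacher control law: cancel the linearised dynamics, so f(x) + u ~ x_star.
xs = np.random.RandomState(0).uniform(0.0, 1.0, 2000)
us = -slope * (xs - x_star)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(xs.reshape(-1, 1), us)

# Closed loop: apply the learned correction only near the target orbit.
x = 0.3
for _ in range(200):
    u = net.predict([[x]])[0] if abs(x - x_star) < 0.1 else 0.0
    x = r * x * (1.0 - x) + u
print(f"final state {x:.4f}, target {x_star:.4f}")
```

The chaotic orbit eventually wanders into the control window, after which the learned correction pins it to the fixed point; the paper's contribution is to achieve this from any point of the basin rather than waiting for the window.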

Relevance:

80.00%

Publisher:

Abstract:

Location estimation is important for wireless sensor network (WSN) applications. In this paper we propose a Cramer-Rao bound (CRB) based analytical approach for two centralized multi-hop localization algorithms, to gain insight into their error performance and its sensitivity to the distance measurement error, anchor node density and anchor placement. The location estimation performance is compared with four distributed multi-hop localization algorithms by simulation to evaluate the efficiency of the proposed analytical approach. The numerical results demonstrate the complex trade-off between centralized and distributed localization algorithms in accuracy, complexity and communication overhead. Based on this analysis, an efficient and scalable performance evaluation tool can be designed for localization algorithms in large-scale WSNs, where simulation-based evaluation approaches are impractical. © 2013 IEEE.
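
For the single-hop case, the CRB for range-based position estimation follows directly from the Fisher information matrix; the sketch below assumes i.i.d. Gaussian range noise and invented coordinates, whereas the paper's multi-hop analysis is more involved.

```python
import numpy as np

def range_crb(node, anchors, sigma):
    """Cramer-Rao lower bound on 2-D position RMS error for range
    measurements with i.i.d. Gaussian noise of std `sigma`."""
    node = np.asarray(node, float)
    J = np.zeros((2, 2))
    for a in np.asarray(anchors, float):
        d = node - a
        u = d / np.linalg.norm(d)          # unit bearing vector to the anchor
        J += np.outer(u, u) / sigma**2     # Fisher information contribution
    return np.sqrt(np.trace(np.linalg.inv(J)))

# Toy layout: four corner anchors, 0.5 m ranging noise.
print(range_crb([2.0, 3.0], [[0, 0], [10, 0], [0, 10], [10, 10]], sigma=0.5))
```

Sweeping the node position and anchor layout in such a bound is exactly the kind of analysis that avoids re-running costly simulations for every configuration.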

Relevance:

80.00%

Publisher:

Abstract:

Background - The intimate relationship between dogs and their owners has the potential to increase the risk of human exposure to bacterial pathogens. Over the past 40 years, there have been several reports of transmission of salmonellae from dogs to humans. This study therefore aimed to determine the prevalence of Salmonella in the faeces of dogs from the Midlands region of the United Kingdom, to assess exposure risk and the potential for zoonotic transmission. Results - A total of 436 apparently healthy dogs without diarrhoea from households (n = 126), rescue centres (n = 96), boarding kennels (n = 43), retired greyhound kennels (n = 39) and a pet nutrition facility (n = 132) were investigated for Salmonella shedding. Faecal samples were processed by an enrichment-culture-based method. The faeces of one dog (0.23%; 95% confidence limits 0.006%, 1.27%) were positive for Salmonella; the isolate was S. enterica subspecies arizonae. Conclusion - This study showed that the prevalence of Salmonella in faeces from apparently healthy dogs across a variety of housing conditions is low; however, Salmonella shedding was still identified.
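
The reported interval is consistent with an exact (Clopper-Pearson) binomial confidence interval for 1 positive out of 436, which can be checked in a few lines:

```python
from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    """Exact binomial confidence interval for x positives out of n."""
    lo = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(1, 436)
print(f"prevalence {1/436:.2%}, 95% CI ({lo:.3%}, {hi:.2%})")
# -> prevalence 0.23%, 95% CI (0.006%, 1.27%), matching the abstract
```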

Relevance:

80.00%

Publisher:

Abstract:

Nowadays, owing to regulation and internal motivations, financial institutions attend more closely to their risks. Besides the previously dominant market and credit risks, the new trend is to handle operational risk systematically. Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems, or from external events. First we present the basic features of operational risk and its modelling and regulatory approaches; we then analyse operational risk within a simulation model framework of our own development. Our approach is based on analysing the latent risk process instead of the manifest risk process that is widely popular in the risk literature. In our model the latent risk process is a stochastic process, the so-called Ornstein-Uhlenbeck process, which is a mean-reverting process. Within this framework we define a catastrophe as the process breaching a critical barrier. We analyse the distributions of catastrophe frequency, severity and first hitting time, not only for a single process but for a dual process as well. Based on our first results, we could not falsify the Poisson character of the frequency or the long-tailed character of the severity; the distribution of the first hitting time requires more sophisticated analysis. At the end of the paper we examine the advantages of simulation-based forecasting, and we conclude with possible directions for future research.
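
A minimal simulation of the latent-risk idea (an Ornstein-Uhlenbeck process with a critical barrier) is sketched below; all parameter values are illustrative assumptions, not calibrated to the paper.

```python
import numpy as np

# Euler-Maruyama simulation of dX = kappa*(mu - X) dt + sigma dW, flagging a
# "catastrophe" the first time X breaches the critical barrier.
rng = np.random.default_rng(42)
kappa, mu, sigma = 0.5, 0.0, 1.0          # mean-reversion speed, level, vol
barrier, dt, T, n_paths = 2.5, 0.01, 10.0, 10_000
steps = int(T / dt)

x = np.zeros(n_paths)                     # all paths start at the mean level
first_hit = np.full(n_paths, np.inf)
for t in range(steps):
    x += kappa * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    newly = (x >= barrier) & np.isinf(first_hit)
    first_hit[newly] = (t + 1) * dt       # record first barrier breach

hits = np.isfinite(first_hit)
print(f"P(hit barrier by T={T}) = {hits.mean():.3f}, "
      f"mean first-hit time = {first_hit[hits].mean():.2f}")
```

Counting breaches per period over many such runs is what yields the empirical frequency and first-hitting-time distributions the paper examines.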

Relevance:

80.00%

Publisher:

Abstract:

Henry Gantt's bar chart (Gantt, 1910) was born more than a hundred years ago, and Kelley (Kelley, 1961) and Walker (Walker, 1959) published their critical path method more than sixty years ago. Are the cost- and resource-planning methods built on these foundations adequate for today's challenges? This study presents the fruit of several years of research. One of its most important aims was to examine how well existing project planning tools meet the challenges of today's projects, and where and in what areas these methods need further development, or even replacement. The author presents methods that take us far beyond the hitherto mainly operative methods of project planning and turn our attention to questions such as: which activities and projects should be implemented; which should be dropped or scheduled into a later project; and how should the implementation and importance of projects be ranked and prioritised? In this research, new matrix-based project planning methods are specified which can address not only operative but also strategic questions: which subprojects and tasks should be completed, how to treat completion priorities when defining the logic plan, and how to support not only traditional but also agile project management approaches. The author introduces a new matrix-based method that can be used for ranking project or multi-project scenarios with different kinds of target functions, describes the methods used in an expert module, and shows how to integrate this expert module into a traditional PMS system.