857 results for Facial Object Based Method
Abstract:
Hazard and operability (HAZOP) studies on chemical process plants are very time-consuming, and often tedious, tasks. A HAZOP study requires a team of experts to analyse systematically every conceivable process deviation, identifying possible causes and any hazards that may result. The systematic nature of the task, and the fact that some team members may be unoccupied for much of the time, can lead to tedium, which in turn may lead to serious errors or omissions. One aid to HAZOP is the fault tree, which presents the system failure logic graphically so that the study team can readily assimilate their findings. Fault trees are also useful for identifying design weaknesses, and may additionally be used to estimate the likelihood of hazardous events occurring. Their one drawback is that, because of the sheer size and complexity of modern process plants, they are difficult to generate by hand. The work in this thesis proposes a computer-based method to aid the development of fault trees for chemical process plants. The aim is to produce concise, structured fault trees that are easy for analysts to understand. Standard plant input-output equation models for major process units are modified to include ancillary units and pipework, reducing the number of nodes required to represent a plant. Control loops and protective systems are modelled as operators which act on process variables. This modelling maintains the functionality of loops, making fault tree generation easier and improving the structure of the fault trees produced. A method, called event ordering, is proposed which allows the magnitude of deviations of controlled or measured variables to be defined in terms of the control loops and protective systems with which they are associated.
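The abstract does not give the generation algorithm itself, but the gate logic that fault trees encode can be sketched briefly. The events, probabilities and tree below are hypothetical, and basic events are assumed independent; this illustrates fault-tree evaluation, not the thesis's method.

```python
# Illustrative fault-tree evaluation only; the thesis's generation
# algorithm is not given in the abstract. Events, probabilities and the
# tree below are hypothetical, and basic events are assumed independent.

class Event:
    def __init__(self, name, prob):
        self.name, self.prob = name, prob

    def probability(self):
        return self.prob

class Gate:
    def __init__(self, kind, children):
        self.kind, self.children = kind, children   # "AND" or "OR"

    def probability(self):
        ps = [c.probability() for c in self.children]
        if self.kind == "AND":                      # every input must occur
            out = 1.0
            for p in ps:
                out *= p
            return out
        # OR: at least one input occurs, P = 1 - prod(1 - p_i)
        out = 1.0
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out

# Hypothetical top event: "overpressure in vessel"
top = Gate("OR", [
    Gate("AND", [Event("relief valve fails", 0.01),
                 Event("controller fails high", 0.05)]),
    Event("blocked outlet", 0.02),
])
print(top.probability())
```

Estimating the top-event probability this way is what makes fault trees usable for quantifying the likelihood of hazardous events, as the abstract notes.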
Abstract:
Oral drug delivery is considered the most popular route of delivery because of the ease of administration, the availability of a wide range of dosage forms and the large surface area for drug absorption across the intestinal membrane. However, besides the unfavourable biopharmaceutical properties of the therapeutic agents, efflux transporters such as P-glycoprotein (P-gp) and multidrug resistance proteins (MRP) decrease overall drug uptake by extruding the drug from the cells. Although prodrugs have been investigated to improve drug partitioning by covalently masking the polar groups with pro-moieties that promote increased uptake, they present significant challenges, including reduced solubility and increased toxicity. The current work investigates the use of amino acids as ion-pairs for three model drugs: indomethacin (weak acid), trimethoprim (weak base) and ciprofloxacin (zwitterion), in an attempt to improve both solubility and uptake. Solubility was studied through salt formation, while the rationale for enhancing absorption was to create new routes for uptake across the membranes via amino acid transporter proteins or dipeptidyl transporters. New salts of the model drugs with oppositely charged amino acids were prepared by freeze drying and characterised using FTIR, ¹H NMR, DSC, SEM, pH-solubility profiles, solubility and dissolution. Permeability profiles were assessed using an in vitro cell-based method (Caco-2 cells), and the genetic changes occurring across the transporter genes and the various pathways involved in cellular activities were studied using DNA microarrays. Solubility data showed a significant increase in drug solubility upon preparing the new salts with the oppositely charged counter-ions (the ciprofloxacin glutamate salt exhibiting a 2.9 × 10³-fold enhancement compared to the free drug).
Moreover, permeability studies showed a threefold increase in trimethoprim and indomethacin permeability upon ion-pairing with amino acids, and more than a tenfold increase when the zwitterionic drug was paired with glutamic acid. Microarray data revealed that trimethoprim was absorbed actively via OCTN1 transporters, while MRP7 is the main transporter gene mediating its efflux. The absorption of trimethoprim from trimethoprim-glutamic acid ion-paired formulations was affected by the ratio of glutamic acid in the formulation, which was inversely proportional to the degree of expression of OCTN1. Interestingly, ciprofloxacin-glutamic acid ion-pairs were found to decrease the up-regulation of ciprofloxacin efflux proteins (P-gp and MRP4) and to over-express two solute carrier transporters (PEPT2 and SLCO1A2), suggesting that a high aqueous binding constant (K11aq) enables the ion-paired formulations to be absorbed as one entity. In conclusion, the formation of ion-pairs with amino acids can positively influence the solubility, transport and gene-expression effects of drugs.
Abstract:
Purpose: (1) To devise a model-based method for estimating the probabilities of binocular fusion, interocular suppression and diplopia from psychophysical judgements; (2) to map out the way fusion, suppression and diplopia vary with the binocular disparity and blur of single edges shown to each eye; (3) to compare the binocular interactions found for edges of the same vs opposite contrast polarity. Methods: Test images were single, horizontal, Gaussian-blurred edges, with blur B = 1–32 min arc and vertical disparity 0–8B, shown for 200 ms. In the main experiment, observers reported whether they saw one central edge, one offset edge, or two edges. We argue that the relation between these three response categories and the three perceptual states (fusion, suppression, diplopia) is indirect and likely to be distorted by positional noise and criterion effects, and so we developed a descriptive, probabilistic model to estimate both the perceptual states and the noise/criterion parameters from the data. Results: (1) Using simulated data, we validated the model-based method by showing that it recovered fairly accurately the disparity ranges for fusion and suppression. (2) The disparity range for fusion (Panum's limit) increased greatly with blur, in line with previous studies. The disparity range for suppression was similar to the fusion limit at large blurs, but two or three times the fusion limit at small blurs; this meant that diplopia was much more prevalent at larger blurs. (3) Diplopia was much more frequent when the two edges had opposite contrast polarity. A formal comparison of models indicated that fusion occurs for same, but not opposite, polarities. The probability of suppression was greater for unequal contrasts, and it was always the lower-contrast edge that was suppressed. Conclusions: Our model-based data analysis offers a useful tool for probing binocular fusion and suppression psychophysically.
The disparity range for fusion increased with edge blur but fell short of complete scale-invariance. The disparity range for suppression also increased with blur but was not close to scale-invariance. Single vision occurs through fusion, but also beyond the fusion range, through suppression. Thus suppression can serve as a mechanism for extending single vision to larger disparities, but mainly for sharper edges where the fusion range is small (5-10 min arc). For large blurs the fusion range is so much larger that no such extension may be needed. © 2014 The College of Optometrists.
Abstract:
A statistics-based method using genetic algorithms for predicting discrete sequences is presented. The prediction of the next value is based on a fixed number of previous values and on statistics gathered from the training data. According to these statistics, different values occurred next in similar past cases. If these values are considered with appropriate weights, the forecast is successful; the weights are generated by genetic algorithms.
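A minimal sketch of this idea, assuming a frequency table over fixed-length contexts and a per-value weight vector. The paper evolves the weights with a genetic algorithm; a simple random-mutation hill climb stands in for it here, and the sequence, context length and parameters are all illustrative.

```python
import random

# Sketch: frequency statistics over fixed-length contexts plus evolved
# per-value weights. A random-mutation hill climb stands in for the
# genetic algorithm; data and parameters are illustrative only.

def successor_stats(seq, k):
    """Count how often each value followed each length-k context."""
    stats = {}
    for i in range(len(seq) - k):
        ctx, nxt = tuple(seq[i:i + k]), seq[i + k]
        stats.setdefault(ctx, {}).setdefault(nxt, 0)
        stats[ctx][nxt] += 1
    return stats

def predict(stats, ctx, weights):
    """Weighted vote over the values that followed this context."""
    counts = stats.get(tuple(ctx), {})
    if not counts:
        return None
    return max(counts, key=lambda v: counts[v] * weights.get(v, 1.0))

random.seed(0)
train = [1, 2, 3, 1, 2, 3] * 20
k = 2
stats = successor_stats(train, k)

def fitness(weights):
    return sum(predict(stats, train[i:i + k], weights) == train[i + k]
               for i in range(len(train) - k))

best = {v: 1.0 for v in set(train)}
best_fit = fitness(best)
for _ in range(50):                      # hill climb in place of the GA
    cand = {v: max(0.0, w + random.uniform(-0.2, 0.2)) for v, w in best.items()}
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f

print(predict(stats, [1, 2], best))
```

On this deterministic periodic sequence every context has a unique successor, so the predictor is exact from the start; the weight search only matters when, as the abstract describes, several different values followed similar past cases.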
Abstract:
Recent evidence has suggested cerebellar anomalies in developmental dyslexia. We therefore investigated cerebellar morphology in subjects with documented reading disabilities. We obtained T1-weighted magnetic resonance images in the coronal and sagittal planes from 11 males with prior histories of developmental dyslexia and nine similarly aged male controls. Proton magnetic resonance spectra (TE = 136 ms, TR = 2.4 s) were obtained bilaterally in the cerebellum. Phonological decoding skill was measured using non-word reading. Handedness was assessed using both the Annett questionnaire of hand preference and Annett's peg-moving task. Cerebellar symmetry was observed in the dyslexics, whereas there was significant asymmetry (right grey matter > left grey matter) in controls. The interpretation of these results depended on whether a motor- or questionnaire-based method was used to determine handedness. The degree of cerebellar symmetry was correlated with the severity of the dyslexics' phonological decoding deficit: those with more symmetric cerebella made more errors on a nonsense-word reading measure of phonological decoding ability. Left cerebellar metabolite ratios correlated significantly with the degree of cerebellar asymmetry (P<0.05) in controls; this relationship was absent in developmental dyslexics. Cerebellar morphology thus reflects the higher degree of symmetry found previously in the temporal and parietal cortex of dyslexics. The relationship of cerebellar asymmetry to phonological decoding ability and handedness, together with our previous finding of altered metabolite ratios in the cerebellum of dyslexics, leads us to suggest that there are alterations in the neurological organisation of the cerebellum which relate to phonological decoding skills, in addition to motor skills and handedness.
Abstract:
Objective In this study, we used a chemometrics-based method to correlate key liposomal adjuvant attributes with in-vivo immune responses through multivariate analysis. Methods The liposomal adjuvant, composed of the cationic lipid dimethyldioctadecylammonium bromide (DDA) and trehalose 6,6-dibehenate (TDB), was modified with 1,2-distearoyl-sn-glycero-3-phosphocholine at a range of mol% ratios, and the main liposomal characteristics (liposome size and zeta potential) were measured along with their immunological performance as an adjuvant for the novel, post-exposure fusion tuberculosis vaccine Ag85B-ESAT-6-Rv2660c (H56 vaccine). Partial least squares regression analysis was applied to correlate and cluster liposomal adjuvant particle characteristics with in-vivo-derived immunological performance (IgG, IgG1, IgG2b, spleen proliferation, IL-2, IL-5, IL-6, IL-10, IFN-γ). Key findings While a range of factors varied across the formulations, decreasing the 1,2-distearoyl-sn-glycero-3-phosphocholine content (and the resulting change in zeta potential) together constituted the strongest variables in the model. Enhanced DDA and TDB content (and the resulting zeta potential) stimulated a response skewed towards cell-mediated immunity, with the model identifying correlations with IFN-γ, IL-2 and IL-6. Conclusion This study demonstrates the application of chemometrics-based correlation and clustering, which can inform liposomal adjuvant design.
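The abstract does not specify the software or data used, so here is a pure-Python, one-component PLS1 (NIPALS) sketch of the kind of regression applied; the attribute matrix and response values are hypothetical toy data.

```python
# Pure-Python one-component PLS1 (NIPALS) sketch. The study's software and
# data are not specified in the abstract; X (rows = formulations, columns =
# particle attributes) and y (one immune readout) are hypothetical.

def pls1_one_component(X, y):
    n, p = len(X), len(X[0])
    # Centre X and y
    mx = [sum(row[j] for row in X) / n for j in range(p)]
    my = sum(y) / n
    Xc = [[row[j] - mx[j] for j in range(p)] for row in X]
    yc = [v - my for v in y]
    # Weight vector w proportional to Xc' yc, normalised
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = Xc w; regression coefficient b = (yc . t) / (t . t)
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    b = sum(yc[i] * t[i] for i in range(n)) / tt
    return [my + b * t[i] for i in range(n)]   # fitted training values

# Toy data: the response depends only on the first attribute
X = [[1.0, 1.0], [2.0, -1.0], [3.0, -1.0], [4.0, 1.0]]
y = [2.0, 4.0, 6.0, 8.0]
fitted = pls1_one_component(X, y)
print(fitted)
```

Because the second column is constructed to be uncorrelated with the response, the single latent component recovers the fit exactly; real formulation data would need several components and cross-validation.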
Abstract:
Signal processing is an important topic in technological research today. In nonlinear dynamics research, the endeavour to control or order chaos is an issue that has received increasing attention over the last few years. Growing interest in neural networks composed of simple processing elements (neurons) has led to the widespread use of such networks to learn to control dynamic systems. This paper presents a backpropagation-based neural network architecture that can be used as a controller to stabilize unstable periodic orbits. It also presents a neural network-based method for transferring the dynamics among attractors, leading to more efficient system control. The procedure can be applied to every point of the basin, no matter how far from the attractor it is. Finally, this paper shows how two mixed chaotic signals can be controlled, using a backpropagation neural network as a filter to separate and control both signals at the same time. The neural network provides more effective control, overcoming the problems that arise with feedback control methods. Control is more effective because it can be applied to the system at any point, even if the system is moving away from the target state, which avoids waiting times. Control can also be applied with little information about the system, and it remains stable longer, even in the presence of random dynamic noise.
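The paper's controller architecture is not detailed in the abstract; the sketch below only shows the backpropagation training step that such a controller builds on, using a one-hidden-layer network trained on XOR. The layer size, learning rate and task are arbitrary choices.

```python
import math, random

# One-hidden-layer backpropagation network trained on XOR. This only
# illustrates the training step such a controller builds on; the paper's
# actual architecture is not given in the abstract. H and lr are arbitrary.
random.seed(1)
H, lr = 4, 0.5
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    h = [sig(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sig(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

start = mse()
for _ in range(3000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                 # output-layer delta
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])    # hidden-layer delta
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(start, "->", mse())
```

In a chaos-control setting the inputs would be the system state and the target would be the stabilising control signal, but the gradient-descent update is the same.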
Abstract:
Neli Maneva, Plamenka Hristova - The paper is devoted to a new approach to extracurricular activities in Informatics for beginners (pupils in grades 3–5). Only the first step of our approach is described in detail, namely the modelling of the objects identified so far, of primary and secondary importance. Some example objects are presented through their main characteristics, revealing their peculiarities and their level of significance for achieving the stated goals of an efficient performance of the activities under consideration.
Abstract:
Background - The intimate relationship between dogs and their owners has the potential to increase the risk of human exposure to bacterial pathogens. Over the past 40 years, there have been several reports of transmission of salmonellae from dogs to humans. This study therefore aimed to determine the prevalence of Salmonella in the faeces of dogs from the Midlands region of the United Kingdom, to assess the exposure risk and the potential for zoonotic transmission. Results - A total of 436 apparently healthy dogs without diarrhoea from households (n = 126), rescue centres (n = 96), boarding kennels (n = 43), retired greyhound kennels (n = 39) and a pet nutrition facility (n = 132) were investigated for Salmonella shedding. Faecal samples were processed by an enrichment-culture-based method. The faecal sample from one dog (0.23%; 95% confidence interval 0.006-1.27%) was positive for Salmonella, identified as S. enterica subspecies arizonae. Conclusion - This study showed that the prevalence of Salmonella in faeces from apparently healthy dogs across a variety of housing conditions is low; however, Salmonella shedding was still identified.
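The reported limits are consistent with an exact binomial (Clopper-Pearson) interval, which can be reproduced with the standard library alone; that this was the method actually used in the study is our assumption.

```python
# Exact (Clopper-Pearson) 95% interval for 1 positive out of 436, using
# only the standard library. That this was the interval method used in the
# study is our assumption; the numbers below match the abstract's limits.

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), 0 < p < 1."""
    total, term = 0.0, (1 - p) ** n
    for i in range(k + 1):
        total += term
        term *= (n - i) / (i + 1) * p / (1 - p)
    return total

def root(f, lo, hi, tol=1e-12):
    """Bisection for a decreasing f with f(lo) > 0 > f(hi)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

k, n, alpha = 1, 436, 0.05
# Lower limit solves P(X >= k) = alpha/2; upper solves P(X <= k) = alpha/2
lower = root(lambda p: alpha / 2 - (1 - binom_cdf(k - 1, n, p)), 0.0, 1.0)
upper = root(lambda p: binom_cdf(k, n, p) - alpha / 2, 0.0, 1.0)
print(round(100 * k / n, 2), round(100 * lower, 3), round(100 * upper, 2))
```

The point estimate rounds to 0.23% and the limits to 0.006% and 1.27%, matching the figures in the abstract.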
Abstract:
Henry Gantt's bar chart (Gantt, 1910) was created more than a hundred years ago, and Kelley (Kelley, 1961) and Walker (Walker, 1959) published their critical path method more than sixty years ago. Are the cost- and resource-planning methods built on these foundations adequate for today's challenges? This study presents the fruit of several years of research. One of its most important aims was to examine to what extent existing project planning tools meet the demands of today's projects, and where and in which areas these methods need to be further developed or superseded. The author presents methods that take us far beyond the hitherto mainly operative tasks of project planning and turn our attention to questions such as: which activities and projects should be implemented; which should be dropped or scheduled into a later project; and how should the implementation and importance of projects be ranked and prioritised? In this research, new matrix-based project planning methods are specified that can address not only operative but also strategic questions: which subprojects and tasks should be completed, how to handle completion priorities when defining the logic plan, and how to support not only traditional but also agile project management approaches. The paper introduces a new matrix-based method that can be used for ranking project or multi-project scenarios with different kinds of target functions. The author also shows methods used in an expert module and how to integrate this module into a traditional PMS system.
Abstract:
The nation's freeway systems are becoming increasingly congested, and traffic incidents are a major contributor. Traffic incidents are non-recurring events, such as accidents or stranded vehicles, that cause a temporary reduction in roadway capacity; they can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic away from incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict. In addition, the duration of an incident is affected by many contributing factors; determining and understanding these factors can help in identifying and developing better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success. This dissertation attempts to improve on these previous efforts by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid decisions about traffic diversion during an ongoing incident. Multiple data mining techniques were applied and evaluated: multiple linear regression analysis and a decision-tree-based method were used to develop the offline models, while a rule-based method and a tree algorithm called M5P were used to develop the online models.
The results show that the models can, in general, achieve high prediction accuracy within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of District 4 FDOT for operational use.
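As a sketch of the decision-tree idea behind the offline models (not the actual FDOT models or data), a single-split regression "stump" that picks the threshold minimising squared error:

```python
# Single-split regression tree ("stump") on hypothetical incident records,
# illustrating the decision-tree idea behind the offline models. The split
# minimises the summed squared error of the two leaf means.

def best_split(xs, ys):
    """Return (threshold, left_mean, right_mean) minimising total SSE."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    xs = [xs[i] for i in order]
    ys = [ys[i] for i in order]
    best = None
    for s in range(1, len(xs)):       # ties in x can give degenerate
        left, right = ys[:s], ys[s:]  # thresholds; ignored in this sketch
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - ml) ** 2 for v in left) +
               sum((v - mr) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, (xs[s - 1] + xs[s]) / 2, ml, mr)
    return best[1:]

# Hypothetical data: lanes blocked vs incident duration (minutes)
lanes    = [0, 0, 1, 1, 2, 2, 3, 3]
duration = [18, 22, 25, 30, 55, 60, 62, 65]
thr, short_mean, long_mean = best_split(lanes, duration)
print(thr, short_mean, long_mean)
```

A full tree recurses this split on each leaf; M5P additionally fits linear models at the leaves instead of constant means.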
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
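One common formulation of the speed-based Mid-Point method assigns each detector's spot speed to half the distance towards each neighbouring detector; whether the study uses exactly this variant is an assumption, and the detector layout and speeds below are hypothetical.

```python
# Mid-Point travel time estimation sketch: each detector's spot speed is
# assumed to hold over half the distance to each neighbouring detector.
# Whether the study uses exactly this variant is an assumption; detector
# positions (miles) and spot speeds (mph) below are hypothetical.

def midpoint_travel_time(positions, speeds):
    """Estimated travel time (hours) over the instrumented segment."""
    total = 0.0
    for i in range(len(positions) - 1):
        half = (positions[i + 1] - positions[i]) / 2.0
        total += half / speeds[i] + half / speeds[i + 1]
    return total

positions = [0.0, 1.0, 2.0, 3.0]          # detector locations (mi)
speeds    = [60.0, 60.0, 30.0, 60.0]      # spot speeds (mph)
print(midpoint_travel_time(positions, speeds) * 60, "minutes")
```

With the slow middle detector the estimate rises from 3 minutes (free flow at 60 mph) to 4 minutes; a hybrid model would switch away from this estimator when clustering flags congested or incident conditions.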
Abstract:
This study examines realistic simulation as a facilitator of the teaching-learning process in nursing. It is justified by the possibility of proposing conditions that envisage improvements in the training process, with a view to assessing the impact of new teaching and learning strategies in the formative areas of health and nursing. It is a descriptive study with a quantitative and qualitative approach, conducted as action research, focusing on teaching through realistic simulation of Nursing in Primary Care at a public higher education institution. The research was developed in the Comprehensive Health Care II discipline, offered in the third year of the course to prepare nursing students for the Primary Health Care placement. The study population comprised 40 subjects: 37 students and 3 teachers of that discipline. Data collection took place from February to May 2014 and was performed using questionnaires and semi-structured interviews. The following sequence was adopted: identification of the use of simulation in the discipline targeted by the intervention; consultation with professors about the possibility of implementing the survey; examination of the discipline's syllabus, objectives, skills and abilities; preparation of the plan for executing the intervention; preparation of the checklist for skills training; and construction, execution and evaluation of the simulation scenarios. Quantitative data were analysed using simple descriptive statistics (percentages), and qualitative data through collective subject discourse. A high-fidelity simulation based on the use of a standardised patient was inserted into the curriculum of the course under study. Three cases were created and executed. In the students' view, the simulation contributed to the synthesis of the content covered in the Integral Health Care II discipline (100%), with scores between 8 and 10 (100%) for the executed scenarios.
In addition, the simulation generated a considerable percentage of high expectations for the activities of the discipline (70.27%) and also proved to be a strategy for generating student satisfaction (97.30%). Of the 97.30% who claimed to be very satisfied with the activities proposed by the discipline, 94.59% of the sample indicated the simulation as a determining factor in that satisfaction. Regarding the students' perception of the simulation strategy, the most prominent category was the possibility of prior experience of practice (23.91%). Nervousness was among the most cited negative aspects of the experience in simulated scenarios (50.0%). The most representative positive point (63.89%) concerned the approximation to the reality of Primary Care. In addition, the three professors of the discipline were trained in the simulation methodology. The study highlighted the contribution of realistic simulation to teaching and learning in nursing, and showed this strategy to be a mechanism for generating expectation and satisfaction among undergraduate nursing students.
Abstract:
The goal of power monitoring in electrical power systems is to promote the reliability as well as the quality of electrical power. This dissertation therefore proposes a new power theory, based on the wavelet transform, for real-time estimation of RMS voltages and currents and of power quantities such as active power, reactive power, apparent power and power factor. Accurate estimation of RMS and power values is important for many applications, such as the design and analysis of power systems, compensation devices for improving power quality, and energy-metering instruments. Simulation and experimental results obtained with the proposed Maximal Overlap Discrete Wavelet Transform (MODWT)-based method were compared with the IEEE Standard 1459-2010 and with a commercial oscilloscope, respectively, and were found to be equivalent. The proposed method performed well with compactly supported mother wavelets, which is consistent with real-time applications.
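The dissertation uses the MODWT; as a simpler stand-in, the sketch below uses an orthonormal Haar DWT to show the Parseval principle that makes wavelet-domain RMS estimation possible: an orthonormal transform preserves signal energy, so the RMS computed from the coefficients equals the time-domain RMS. The test waveform is arbitrary.

```python
import math

# Orthonormal Haar DWT sketch: an orthonormal wavelet transform preserves
# signal energy (Parseval), so RMS can be computed from the coefficients.
# This illustrates the principle behind wavelet-based RMS estimation; the
# dissertation itself uses the MODWT, and the waveform here is arbitrary.

def haar_dwt(x):
    """Full Haar decomposition; len(x) must be a power of two."""
    coeffs = []
    approx = list(x)
    while len(approx) > 1:
        s = [(approx[i] + approx[i + 1]) / math.sqrt(2)
             for i in range(0, len(approx), 2)]
        d = [(approx[i] - approx[i + 1]) / math.sqrt(2)
             for i in range(0, len(approx), 2)]
        coeffs.extend(d)          # detail coefficients at this level
        approx = s                # carry the approximation down a level
    coeffs.extend(approx)
    return coeffs

# A sampled sinusoidal "voltage" waveform, amplitude 10, whole periods
N = 64
x = [10 * math.sin(2 * math.pi * i / 16) for i in range(N)]
c = haar_dwt(x)

rms_time    = math.sqrt(sum(v * v for v in x) / N)
rms_wavelet = math.sqrt(sum(v * v for v in c) / N)
print(rms_time, rms_wavelet)
```

The two RMS values agree to machine precision, and for the sinusoid equal the analytical 10/√2; in the MODWT setting the same energy argument additionally lets the RMS be split across frequency bands.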
Abstract:
The great interest in nonlinear system identification is mainly due to the fact that a large number of real systems are complex and need to have their nonlinearities considered so that their models can be used successfully in applications of control, prediction and inference, among others. This work evaluates the application of Fuzzy Wavelet Neural Networks (FWNN) to the identification of nonlinear dynamical systems subject to noise and outliers. These elements generally have a negative effect on the identification procedure, leading to erroneous interpretations of the dynamical behaviour of the system. The FWNN combines in a single structure the ability of fuzzy logic to deal with uncertainty, the multiresolution characteristics of wavelet theory, and the learning and generalisation abilities of artificial neural networks. Usually, the learning procedure of these neural networks is realised by a gradient-based method that uses the mean squared error as its cost function. This work proposes replacing this traditional function with an Information Theoretic Learning similarity measure called correntropy. With this similarity measure, higher-order statistics can be taken into account during the FWNN training process; the measure is therefore better suited to non-Gaussian error distributions and makes training less sensitive to the presence of outliers. To evaluate this replacement, FWNN models are obtained in two identification case studies: a real nonlinear system, consisting of a multi-section tank, and a simulated system based on a model of the human knee joint. The results demonstrate that using correntropy as the cost function of the error backpropagation algorithm makes the identification procedure with FWNN models more robust to outliers. However, this is only achieved if the Gaussian kernel width of the correntropy is properly adjusted.
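A small sketch of the correntropy criterion named in the abstract, contrasted with MSE on a hypothetical error vector: the Gaussian kernel saturates for large errors, so a gross outlier barely moves the correntropy while it dominates the MSE. The kernel width and error values are illustrative.

```python
import math

# Correntropy of prediction errors contrasted with MSE. With a Gaussian
# kernel, large (outlier) errors saturate the kernel, so they influence
# this criterion far less than they influence MSE. The kernel width sigma
# and the error values below are illustrative choices.

def mse(errors):
    return sum(e * e for e in errors) / len(errors)

def correntropy(errors, sigma=1.0):
    """Mean Gaussian kernel of the errors (maximised during training)."""
    return sum(math.exp(-e * e / (2 * sigma ** 2)) for e in errors) / len(errors)

clean   = [0.1, -0.2, 0.05, 0.15]
outlier = [0.1, -0.2, 0.05, 50.0]   # one gross outlier

print(mse(clean), mse(outlier))                  # MSE explodes
print(correntropy(clean), correntropy(outlier))  # correntropy barely moves
```

This saturation is also why the abstract stresses tuning the kernel width: too small a sigma treats ordinary errors as outliers, while too large a sigma makes the criterion behave like MSE again.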