838 results for Automated algorithms


Relevance:

20.00%

Publisher:

Abstract:

Purpose: To examine the relationship of functional measurements with structural measures. Methods: 146 eyes of 83 test subjects underwent Heidelberg Retina Tomograph (HRT III) imaging (disc area < 2.43, mphsd < 40) and perimetry testing with Octopus (SAP; Dynamic), Pulsar (PP; TOP), and Moorfields MDT (ESTA). Glaucoma was defined as progressive structural or functional loss (20 eyes). Perimetry test points were grouped into 6 sectors based on the estimated optic nerve head angle into which the associated nerve fiber bundle enters (Garway-Heath map). Perimetry summary measures (PSM) (MD SAP / MD PP / PTD MDT) were calculated from the average total deviation of each measured threshold from the normal for each sector. We calculated the 95% significance level of the sectorial PSM from the respective normative data. We calculated the percentage agreement with group 1 (G1), healthy on HRT and within normal perimetric limits, and group 2 (G2), abnormal on HRT and outside normal perimetric limits. We also examined the relationship of PSM and rim area (RA) in those sectors classified as abnormal by the Moorfields Regression Analysis (MRA) of HRT. Results: The mean age was 65 years (range 37-89). The global sensitivity versus specificity of each instrument in detecting glaucomatous eyes was: MDT 80% vs. 88%, SAP 80% vs. 80%, PP 70% vs. 89%, and HRT 80% vs. 79%. The highest percentage agreements of HRT with PSM (G1, G2, sector, respectively) were MDT (89%, 57%, nasal superior), SAP (83%, 74%, temporal superior), and PP (74%, 63%, nasal superior). Globally, percentage agreement (G1, G2, respectively) was MDT (92%, 28%), SAP (87%, 40%), and PP (77%, 49%). Linear regression showed no significant global trend associating RA and PSM. Sectorally, however, the supero-nasal sector showed a statistically significant (p < 0.001) trend with each instrument, with r² coefficients of 0.38 (MDT), 0.56 (SAP), and 0.39 (PP). Conclusions: There were no significant differences in global sensitivity or specificity between instruments. Structure-function relationships varied significantly between instruments and were consistently strongest supero-nasally. Further studies are required to investigate these relationships in detail.
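
A minimal sketch of the sectoral summary-measure computation described above, assuming hypothetical arrays of per-point total deviations and a Garway-Heath sector assignment per test point (the point count, the values, and the normative cut-offs are all illustrative):

    import numpy as np

    # Hypothetical inputs: total deviation (dB) at each perimetry test point,
    # and the Garway-Heath sector (0-5) each point maps to.
    total_deviation = np.random.default_rng(0).normal(-2, 3, size=54)
    sector_of_point = np.random.default_rng(1).integers(0, 6, size=54)

    # Sectoral perimetry summary measure (PSM): mean total deviation per sector.
    psm = np.array([total_deviation[sector_of_point == s].mean() for s in range(6)])

    # Normative 5th percentiles per sector (invented values); a sector is
    # flagged as outside normal limits if its PSM falls below the cut-off.
    normative_p5 = np.array([-4.1, -3.8, -4.5, -4.0, -3.9, -4.2])
    abnormal = psm < normative_p5
    print(dict(zip(range(6), zip(psm.round(2), abnormal))))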

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: EEG and somatosensory evoked potentials are highly predictive of poor outcome after cardiac arrest; their accuracy for predicting good recovery is, however, low. We evaluated whether the addition of an automated mismatch negativity-based auditory discrimination paradigm (ADP) to EEG and somatosensory evoked potentials improves the prediction of awakening. METHODS: EEG and ADP were prospectively recorded in 30 adults during therapeutic hypothermia and in normothermia. We studied the progression of auditory discrimination on single-trial multivariate analyses from therapeutic hypothermia to normothermia, and its correlation with outcome at 3 months, assessed with cerebral performance categories. RESULTS: At 3 months, 18 of 30 patients (60%) survived; 5 had severe neurologic impairment (cerebral performance category 3) and 13 had good recovery (cerebral performance categories 1-2). All 10 subjects showing an improvement in auditory discrimination from therapeutic hypothermia to normothermia regained consciousness: ADP was 100% predictive of awakening. The addition of ADP significantly improved mortality prediction (area under the curve, 0.77 for the standard model including clinical examination, EEG, and somatosensory evoked potentials, versus 0.86 after adding ADP; P = 0.02). CONCLUSIONS: This automated ADP significantly improves early coma prognostic accuracy after cardiac arrest and therapeutic hypothermia. The progression of auditory discrimination is strongly predictive of favorable recovery and appears complementary to existing prognosticators of poor outcome. Before routine implementation, validation on larger cohorts is warranted.
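
A minimal sketch of the kind of single-trial discrimination tracking described above, assuming per-trial decoder scores for standard versus deviant sounds at each recording (the decoder outputs are simulated, not the study's data):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)

    def discrimination_auc(scores_standard, scores_deviant):
        """AUC of a single-trial decoder separating standard from deviant tones."""
        y = np.r_[np.zeros(len(scores_standard)), np.ones(len(scores_deviant))]
        return roc_auc_score(y, np.r_[scores_standard, scores_deviant])

    # Illustrative decoder outputs during hypothermia (TH) and normothermia (NT).
    auc_th = discrimination_auc(rng.normal(0.0, 1, 200), rng.normal(0.3, 1, 200))
    auc_nt = discrimination_auc(rng.normal(0.0, 1, 200), rng.normal(0.6, 1, 200))

    # The predictor of awakening is an *improvement* in discrimination TH -> NT.
    print(f"TH AUC={auc_th:.2f}, NT AUC={auc_nt:.2f}, improved={auc_nt > auc_th}")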

Relevance:

20.00%

Publisher:

Abstract:

Revenue management (RM) is a complicated business process that can best be described as control of sales (using prices, restrictions, or capacity), usually using software as a tool to aid decisions. RM software can play a merely informative role, supplying analysts with formatted and summarized data which they use to make control decisions (setting a price or allocating capacity for a price point), or, at the other extreme, play a deeper role, automating the decision process completely. The RM models and algorithms in the academic literature by and large concentrate on the latter, completely automated, level of functionality.

A firm considering using a new RM model or RM system needs to evaluate its performance. Academic papers justify the performance of their models using simulations, where customer booking requests are simulated according to some process and model, and the revenue performance of the algorithm is compared to an alternate set of algorithms. Such simulations, while an accepted part of the academic literature, and indeed providing research insight, often lack credibility with management. Even methodologically, they are usually flawed, as the simulations only test "within-model" performance and say nothing as to the appropriateness of the model in the first place. Even simulations that test against alternate models or competition are limited by their inherent necessity of fixing some model as the universe for their testing. These problems are exacerbated with RM models that attempt to model customer purchase behavior or competition, as the right models for competitive actions or customer purchases remain somewhat of a mystery, or at least there is no consensus on their validity.

How then to validate a model? Putting it another way, we want to show that a particular model or algorithm is the cause of a certain improvement to the RM process compared to the existing process. We take care to emphasize that we want to prove the said model is the cause of performance, and to compare against an (incumbent) process rather than against an alternate model.

In this paper we describe a "live" testing experiment that we conducted at Iberia Airlines on a set of flights. A set of competing algorithms controlled a set of flights during adjacent weeks, and their behavior and results were observed over a relatively long period of time (9 months). In parallel, a group of control flights was managed using the traditional mix of manual and algorithmic control (the incumbent system). Such "sandbox" testing, while common at many large internet search and e-commerce companies, is relatively rare in the revenue management area. Sandbox testing has an indisputable model of customer behavior, but the experimental design and analysis of results is less clear. In this paper we describe the philosophy behind the experiment, the organizational challenges, the design and setup of the experiment, and outline the analysis of the results. This paper is a complement to a (more technical) related paper that describes the econometrics and statistical analysis of the results.
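
A minimal sketch of the week-alternating assignment such a live test implies, with purely invented flight numbers and revenues (nothing here reflects the actual Iberia experiment or its design):

    import statistics

    # Illustrative flight legs and a 9-month horizon of ISO weeks.
    flights = ["IB3100", "IB3102", "IB3104", "IB3106"]
    weeks = range(1, 40)

    # Alternate control of each flight between the incumbent system and the
    # candidate algorithm on adjacent weeks, staggered across flights.
    assignment = {(f, w): ("candidate" if (w + i) % 2 == 0 else "incumbent")
                  for i, f in enumerate(flights) for w in weeks}

    # Toy revenue observations keyed the same way are then compared per arm.
    revenue = {k: 10_000 + (500 if v == "candidate" else 0)
               for k, v in assignment.items()}
    for arm in ("incumbent", "candidate"):
        obs = [revenue[k] for k, v in assignment.items() if v == arm]
        print(arm, round(statistics.mean(obs)))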

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Sedation and therapeutic hypothermia (TH) delay neurological responses and might reduce the accuracy of clinical examination to predict outcome after cardiac arrest (CA). We examined the accuracy of quantitative pupillary light reactivity (PLR), measured with automated infrared pupillometry, to predict the outcome of post-CA coma in comparison to standard PLR, EEG, and somatosensory evoked potentials (SSEP). METHODS: We prospectively studied, over a 1-year period (June 2012-June 2013), 50 consecutive comatose CA patients treated with TH (33 °C, 24 h). Quantitative PLR (expressed as the percentage of pupillary response to a calibrated light stimulus) and standard PLR were measured at day 1 (TH and sedation; on average 16 h after CA) and day 2 (normothermia, off sedation; on average 46 h after CA). Neurological outcome was assessed at 90 days with Cerebral Performance Categories (CPC), dichotomized as good (CPC 1-2) versus poor (CPC 3-5). Predictive performance was analyzed using the area under the ROC curve (AUC). RESULTS: Patients with good outcome [n = 23 (46%)] had higher quantitative PLR than those with poor outcome (n = 27): 16% (range 9-23) vs. 10% (1-30) at day 1, and 20% (13-39) vs. 11% (1-55) at day 2 (both p < 0.001). The best cut-off of quantitative PLR for outcome prediction was <13%. The AUC to predict poor outcome was higher for quantitative than for standard PLR at both time points (day 1, 0.79 vs. 0.56, p = 0.005; day 2, 0.81 vs. 0.64, p = 0.006). The prognostic accuracy of quantitative PLR was comparable to that of EEG and SSEP (0.81 vs. 0.80 and 0.73, respectively, both p > 0.20). CONCLUSIONS: Quantitative PLR is more accurate than standard PLR in predicting the outcome of post-anoxic coma, irrespective of temperature and sedation, and has prognostic accuracy comparable to EEG and SSEP.
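
A minimal sketch of the AUC-based comparison described in the methods, using made-up pupillary response values rather than the study's data (the binary surrogate for standard PLR is also an assumption of the example):

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)

    # Made-up day-1 measurements: quantitative PLR (% response) and a binary
    # standard PLR (reactive / non-reactive); label 1 = poor outcome.
    poor = np.r_[np.zeros(23), np.ones(27)]
    quantitative_plr = np.where(poor == 1, rng.normal(10, 4, 50),
                                rng.normal(16, 4, 50))
    standard_plr = (quantitative_plr > 5).astype(int)  # crude binary surrogate

    # Lower pupillary reactivity predicts poor outcome, so score = -PLR.
    print("quantitative AUC:", round(roc_auc_score(poor, -quantitative_plr), 2))
    print("standard AUC:   ", round(roc_auc_score(poor, -standard_plr), 2))

    # A classifier from the reported cut-off: quantitative PLR < 13%.
    predicted_poor = quantitative_plr < 13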

Relevance:

20.00%

Publisher:

Abstract:

PRECON S.A. is a manufacturing company dedicated to producing prefabricated concrete parts for several industries, such as rail transportation and agriculture. Recently, PRECON signed a contract with RENFE, the Spanish national rail transportation company, to manufacture pre-stressed concrete sleepers for sidings of the new railways of the high-speed train AVE. The scheduling problem associated with the manufacturing process of the sleepers is very complex, since it involves several constraints and objectives. The constraints are related to production capacity, the quantity of available moulds, satisfying demand, and other operational constraints. The two main objectives are maximizing the usage of the manufacturing resources and minimizing mould movements. We developed a deterministic crowding genetic algorithm for this multiobjective problem. The algorithm has proved to be a powerful and flexible tool for solving the large-scale instance of this complex real scheduling problem.
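
A minimal sketch of the deterministic crowding replacement scheme at the heart of such a genetic algorithm, shown on a toy bit-string problem rather than the paper's sleeper-scheduling model:

    import random

    random.seed(0)
    L = 20
    fitness = lambda x: sum(x)                      # toy objective: one-max
    hamming = lambda a, b: sum(i != j for i, j in zip(a, b))

    def crossover_mutate(p1, p2, pm=0.05):
        cut = random.randrange(1, L)
        child = p1[:cut] + p2[cut:]
        return [b ^ (random.random() < pm) for b in child]

    pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(40)]
    for _ in range(200):
        random.shuffle(pop)
        for i in range(0, len(pop), 2):
            p1, p2 = pop[i], pop[i + 1]
            c1, c2 = crossover_mutate(p1, p2), crossover_mutate(p2, p1)
            # Deterministic crowding: each child competes with its most
            # similar parent, preserving niches in a multimodal landscape.
            if hamming(c1, p1) + hamming(c2, p2) > hamming(c1, p2) + hamming(c2, p1):
                c1, c2 = c2, c1
            if fitness(c1) >= fitness(p1): pop[i] = c1
            if fitness(c2) >= fitness(p2): pop[i + 1] = c2

    print(max(map(fitness, pop)))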

Relevance:

20.00%

Publisher:

Abstract:

We propose a method for brain atlas deformation in the presence of large space-occupying tumors, based on an a priori model of lesion growth that assumes radial expansion of the lesion from its starting point. Our approach involves three steps. First, an affine registration brings the atlas and the patient into global correspondence. Then, the seeding of a synthetic tumor into the brain atlas provides a template for the lesion. The last step is the deformation of the seeded atlas, combining a method derived from optical flow principles and a model of lesion growth. Results show that a good registration is performed and that the method can be applied to automatic segmentation of structures and substructures in brains with gross deformation, with important medical applications in neurosurgery, radiosurgery, and radiotherapy.
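
A minimal sketch of the radial-expansion assumption behind such a lesion growth model, as a synthetic displacement field that decays with distance from the seed point (the grid shape and decay law are illustrative, not the paper's formulation):

    import numpy as np

    def radial_growth_field(shape, seed, radius, strength=1.0):
        """Toy displacement field pushing voxels radially away from a seed
        point, mimicking radial lesion expansion from its starting point."""
        grid = np.stack(np.meshgrid(*[np.arange(n) for n in shape], indexing="ij"))
        delta = grid - np.array(seed).reshape(-1, 1, 1, 1)
        dist = np.linalg.norm(delta, axis=0) + 1e-9
        # Displacement decays with distance so far-away tissue barely moves.
        magnitude = strength * radius * np.exp(-dist / radius)
        return delta / dist * magnitude

    field = radial_growth_field((64, 64, 64), seed=(32, 40, 28), radius=8)
    print(field.shape)  # (3, 64, 64, 64): one displacement vector per voxel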

Relevance:

20.00%

Publisher:

Abstract:

Recently, several anonymization algorithms have appeared for privacy preservation on graphs. Some of them are based on randomization techniques and some on k-anonymity concepts. Both can be used to obtain an anonymized graph with a given k-anonymity value. In this paper we compare algorithms based on both techniques for obtaining an anonymized graph with a desired k-anonymity value. We analyze the complexity of these methods in generating anonymized graphs and the quality of the resulting graphs.
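
A minimal sketch of the k-anonymity notion in its common degree-based form, with a randomization-style perturbation of the kind compared here (networkx usage assumed; the example graph and the swap count are arbitrary):

    import random
    from collections import Counter
    import networkx as nx

    def k_degree_anonymity(g):
        """Largest k such that every degree value in g is shared by >= k nodes."""
        return min(Counter(dict(g.degree()).values()).values())

    g = nx.karate_club_graph()
    print("k before:", k_degree_anonymity(g))

    # Randomization-style perturbation (random edge delete + add), one common
    # family of graph anonymization methods; k-anonymity is then re-measured.
    rng = random.Random(1)
    g2 = g.copy()
    for _ in range(10):
        g2.remove_edge(*rng.choice(list(g2.edges())))
        g2.add_edge(*rng.sample(list(g2.nodes()), 2))
    print("k after :", k_degree_anonymity(g2))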

Relevance:

20.00%

Publisher:

Abstract:

Visual analysis of electroencephalography (EEG) background and reactivity during therapeutic hypothermia provides important outcome information but is time-consuming and not always consistent between reviewers. Automated EEG analysis may help quantify brain damage. Forty-six comatose patients in therapeutic hypothermia after cardiac arrest were included in the study. EEG background was quantified with the burst-suppression ratio (BSR) and approximate entropy, both also used to monitor anesthesia. Reactivity was detected through changes in the power spectrum of the signal before and after stimulation. Automatic results reached almost perfect agreement (discontinuity) to substantial agreement (background reactivity) with visual scores from EEG-certified neurologists. The burst-suppression ratio was better suited than approximate entropy to distinguish continuous EEG background from burst-suppression in this specific population. Automatic EEG background and reactivity measures were significantly related to good and poor outcome. We conclude that quantitative EEG measurements can provide promising information regarding the current state of the patient and the clinical outcome, but further work is needed before routine application in a clinical setting.
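
A minimal sketch of a burst-suppression ratio computation, assuming suppression is flagged wherever a smoothed amplitude envelope stays below a threshold for long enough (the threshold, window lengths, and toy signal are illustrative, not the study's parameters):

    import numpy as np

    def burst_suppression_ratio(eeg, fs, thresh_uv=5.0, min_suppr_s=0.5):
        """Fraction of time the EEG amplitude envelope stays below thresh_uv
        for at least min_suppr_s seconds (a common BSR definition)."""
        win = int(0.05 * fs)                             # 50 ms smoothing window
        envelope = np.convolve(np.abs(eeg), np.ones(win) / win, mode="same")
        suppressed = envelope < thresh_uv
        # Keep only suppression runs longer than min_suppr_s.
        out, run = np.zeros_like(suppressed), 0
        for i, s in enumerate(suppressed):
            run = run + 1 if s else 0
            if run >= int(min_suppr_s * fs):
                out[i - run + 1 : i + 1] = True
        return out.mean()

    fs = 250
    t = np.arange(0, 30, 1 / fs)
    eeg = np.where((t % 4) < 1, 40, 2) * np.sin(2 * np.pi * 10 * t)  # toy bursts
    print(f"BSR = {burst_suppression_ratio(eeg, fs):.2f}")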

Relevance:

20.00%

Publisher:

Abstract:

The high complexity of cortical convolutions in humans is very challenging, both for engineers who must measure and compare it and for biologists and physicians who seek to understand it. In this paper, we propose a surface-based method for the quantification of cortical gyrification. Our method uses accurate 3-D cortical reconstruction and computes local measurements of gyrification at thousands of points over the whole cortical surface. The potential of our method to precisely identify and localize gyral abnormalities is illustrated by a clinical study on a group of children affected by 22q11 Deletion Syndrome, compared to control individuals.
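
A minimal sketch of a local gyrification-style measurement: at each point, the ratio of cortical surface area to outer-hull area inside a sphere of fixed radius, so values above 1 indicate buried, folded cortex (the random point clouds and per-vertex areas are stand-ins for a real 3-D reconstruction):

    import numpy as np

    def local_gyrification(cortex_pts, cortex_areas, hull_pts, hull_areas,
                           center, r=25.0):
        """Ratio of cortical to outer-hull surface area within radius r."""
        in_cortex = np.linalg.norm(cortex_pts - center, axis=1) < r
        in_hull = np.linalg.norm(hull_pts - center, axis=1) < r
        return cortex_areas[in_cortex].sum() / hull_areas[in_hull].sum()

    rng = np.random.default_rng(3)
    cortex_pts = rng.normal(0, 30, (20000, 3))     # stand-in for mesh vertices
    hull_pts = rng.normal(0, 30, (5000, 3))        # stand-in for hull vertices
    cortex_areas = np.full(20000, 0.8)             # per-vertex area (mm^2)
    hull_areas = np.full(5000, 1.0)
    gi = local_gyrification(cortex_pts, cortex_areas, hull_pts, hull_areas,
                            center=np.array([10.0, 0.0, 5.0]))
    print(f"local GI = {gi:.2f}")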

Relevance:

20.00%

Publisher:

Abstract:

Land evaluation is the process of estimating the potential use of land on the basis of its attributes. A wide variety of analytical models can be used in this process. In Brazil, the two most widely used land evaluation systems are the Land Use Capability Classification System and the FAO/Brazilian System of Agricultural Aptitude of Lands. Although they differ in several aspects, both require the cross-referencing of numerous environmental variables. ALES (Automated Land Evaluation System) is a computer program for building expert systems for land evaluation. The entities evaluated by ALES are map units, which may be either generalized or detailed in character. The area evaluated in this study comprises the Chapecó and Xanxerê micro-regions in western Santa Catarina and encompasses 54 municipalities. Data on soils and landscape characteristics were obtained from the state soil reconnaissance survey at a scale of 1:250,000. The present study developed the expert system ATOSC (Avaliação das Terras do Oeste de Santa Catarina; Land Evaluation of Western Santa Catarina), whose construction included the definition of the requirements of the land utilization types, followed by their comparison with the attributes of each map unit. The land utilization types considered were common bean, maize, soybean, and wheat as single crops, under the rainfed and management conditions characteristic of these crops in the state. The information on natural resources comprises the climate, soil, and landscape attributes that affect the production of these crops. For each land utilization type, the code, the name, and the respective land use requirements were specified in ATOSC. The requirements of each crop were defined by a specific combination of the selected land characteristics, which determines the severity level of each requirement with respect to the crop. Four severity levels were established, indicating an increasing degree of limitation or a decreasing potential for a given land use: null or slight limitation (favorable); moderate limitation (moderately favorable); strong limitation (poorly favorable); and very strong limitation (unfavorable). In the decision tree, the basic component of the expert system, rules are implemented that assign the land to defined suitability classes, based on the rating of the requirements according to the use type. ATOSC facilitated the comparison between the land characteristics of the Chapecó and Xanxerê micro-regions and the use requirements considered, by performing the land evaluation automatically and thus reducing the time spent on this process. Most of the land in the Chapecó and Xanxerê micro-regions was assigned to the poorly favorable (3) and unfavorable (4) suitability classes for the crops considered. The main limiting factors identified in these micro-regions were natural fertility and erosion risk for common bean and maize, and mechanization conditions and erosion risk for soybean and wheat.
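
A minimal sketch of the decision-tree style rules such an expert system encodes, mapping severity levels of illustrative land characteristics to a suitability class (the attribute names, thresholds, and requirement sets are invented for the example, not ATOSC's actual rules):

    # Severity levels: 1 = null/slight ... 4 = very strong limitation.
    def severity_fertility(base_saturation_pct):
        return 1 if base_saturation_pct >= 50 else 2 if base_saturation_pct >= 35 else 3

    def severity_erosion(slope_pct):
        return 1 if slope_pct < 8 else 2 if slope_pct < 20 else 3 if slope_pct < 45 else 4

    # Requirements considered per land utilization type (illustrative subsets).
    REQUIREMENTS = {"maize": [(severity_fertility, "base_saturation_pct"),
                              (severity_erosion, "slope_pct")],
                    "soybean": [(severity_erosion, "slope_pct")]}

    def suitability(map_unit, crop):
        # Decision-tree rule: the class is driven by the worst (highest)
        # severity among the requirements relevant to the crop.
        return max(fn(map_unit[attr]) for fn, attr in REQUIREMENTS[crop])

    unit = {"base_saturation_pct": 40, "slope_pct": 25}
    print("maize class:", suitability(unit, "maize"))   # 3 = poorly favorable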

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: Serologic testing algorithms for recent HIV seroconversion (STARHS) provide important information for HIV surveillance. We have shown that a patient's antibody reaction in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score, Innogenetics) provides information on the duration of infection. Here, we sought to further investigate the diagnostic specificity of various Inno-Lia algorithms and to identify factors affecting it. METHODS: Plasma samples of 714 selected patients of the Swiss HIV Cohort Study, infected for longer than 12 months and representing all viral clades and stages of chronic HIV-1 infection, were tested blindly by Inno-Lia and classified as either incident (up to 12 months) or older infection by 24 different algorithms. Of the total, 524 patients received HAART, 308 had HIV-1 RNA below 50 copies/mL, and 620 were infected by an HIV-1 non-B clade. Using logistic regression analysis, we evaluated factors that might affect the specificity of these algorithms. RESULTS: HIV-1 RNA <50 copies/mL was associated with significantly lower reactivity to all five HIV-1 antigens of the Inno-Lia and impaired the specificity of most algorithms. Among 412 patients either untreated or with HIV-1 RNA ≥50 copies/mL despite HAART, the median specificity of the algorithms was 96.5% (range 92.0-100%). The only factor that significantly promoted false-incident results in this group was age, with false-incident results increasing by a few percent per additional year. HIV-1 clade, HIV-1 RNA, CD4 percentage, sex, disease stage, and testing modalities had no significant effect. Results were similar among the 190 untreated patients. CONCLUSIONS: The specificity of most Inno-Lia algorithms was high and not affected by HIV-1 variability, advanced disease, and other factors promoting false-recent results in other STARHS. Specificity should be good in any group of untreated HIV-1 patients.
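
A minimal sketch of the kind of rule such serologic algorithms apply: combine the per-antigen band intensities of the line immunoassay and call the infection "incident" below a cut-off (the band names mirror common HIV-1 antigens, but the scores, cut-off, and rule are invented):

    # Per-antigen band intensities from the line immunoassay, scored 0-4.
    sample = {"sgp120": 2, "gp41": 3, "p31": 1, "p24": 2, "p17": 0}

    def classify(bands, cutoff=8):
        """Toy STARHS-style rule: weak total reactivity suggests a recent
        (incident, <= 12 months) infection; strong reactivity an older one."""
        total = sum(bands.values())
        return "incident" if total < cutoff else "older"

    print(classify(sample))  # -> 'older' or 'incident' per the toy cut-off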

Relevance:

20.00%

Publisher:

Abstract:

Chlamydia serology is indicated to investigate the etiology of miscarriage, infertility, pelvic inflammatory disease, and ectopic pregnancy. Here, we assessed the reliability of a new automated multiplex immunofluorescence assay (InoDiag test) to detect specific anti-C. trachomatis immunoglobulin G. Considering the immunofluorescence assay (IF) as the gold standard, InoDiag tests exhibited similar sensitivity (65.5%) but better specificity (95.1%-98%) than enzyme-linked immunosorbent assays (ELISAs). InoDiag tests demonstrated similar or lower cross-reactivity rates compared to ELISA or IF.
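
A minimal sketch of how sensitivity and specificity against a gold standard are computed from paired test results (the counts below are invented to land near the reported figures; they are not the study's data):

    def sensitivity_specificity(test, gold):
        """test, gold: lists of booleans (True = positive)."""
        tp = sum(t and g for t, g in zip(test, gold))
        tn = sum(not t and not g for t, g in zip(test, gold))
        fp = sum(t and not g for t, g in zip(test, gold))
        fn = sum(not t and g for t, g in zip(test, gold))
        return tp / (tp + fn), tn / (tn + fp)

    # Invented example: 29 IF-positive and 71 IF-negative sera.
    gold = [True] * 29 + [False] * 71
    test = [True] * 19 + [False] * 10 + [False] * 68 + [True] * 3
    sens, spec = sensitivity_specificity(test, gold)
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")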

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Difficult tracheal intubation remains a constant and significant source of morbidity and mortality in anaesthetic practice. Insufficient airway assessment in the preoperative period continues to be a major cause of unanticipated difficult intubation. Although many risk factors have already been identified, preoperative airway evaluation is not always regarded as a standard procedure, and the respective weight of each risk factor remains unclear. Moreover, the available predictive scores have low sensitivity, are only moderately specific, and are often operator-dependent. In order to improve the preoperative detection of patients at risk of difficult intubation, we developed a system for automated and objective evaluation of morphologic criteria of the face and neck using video recordings and advanced techniques borrowed from face recognition. Method and results: Frontal video sequences were recorded in 5 healthy volunteers. During the video recording, subjects were requested to perform maximal flexion-extension of the neck and to open the mouth wide with the tongue pulled out. A robust, real-time face tracking system was then applied, automatically identifying and mapping a grid of 55 control points on the face, which were tracked during head motion. These points locate important features of the face, such as the eyebrows, the nose, the contours of the eyes and mouth, and the external contours, including the chin. Moreover, based on this face tracking, the orientation of the head could be estimated at each frame of the video sequence. Thus, we could infer for each frame the pitch angle of the head pose (related to the vertical rotation of the head) and obtain the degree of head extension. Morphological criteria used in the most frequently cited predictive scores were also extracted, such as mouth opening, degree of visibility of the uvula, and thyromental distance. Discussion and conclusion: Preliminary results suggest that the technique is highly feasible. The next step will be the application of the same automated and objective evaluation to patients who will undergo tracheal intubation. The difficulties related to intubation will then be correlated with the biometric characteristics of the patients. The objective in mind is to analyze the biometric data with artificial intelligence algorithms to build a highly sensitive and specific predictive test.
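
A minimal sketch of one step this pipeline implies: estimating head pose per frame from tracked landmarks and reading off the pitch angle, whose range over the flexion-extension movement gives the degree of head extension (OpenCV solvePnP with a generic landmark model; all coordinates, the focal length, and the Euler convention are assumptions of the example):

    import numpy as np
    import cv2

    # Generic 3-D face model points (mm) and their tracked 2-D positions (px)
    # for one frame; a real system would supply 55 tracked points per frame.
    model_3d = np.array([[0, 0, 0], [0, -63, -12], [-43, 32, -26],
                         [43, 32, -26], [-28, -28, -24], [28, -28, -24]], float)
    frame_2d = np.array([[320, 240], [318, 320], [260, 200],
                         [380, 200], [290, 300], [350, 300]], float)

    f = 640.0  # assumed focal length in pixels
    K = np.array([[f, 0, 320], [0, f, 240], [0, 0, 1]])

    ok, rvec, tvec = cv2.solvePnP(model_3d, frame_2d, K, None)
    R, _ = cv2.Rodrigues(rvec)
    pitch = np.degrees(np.arctan2(-R[2, 1], R[2, 2]))  # vertical head rotation
    print(f"pitch = {pitch:.1f} deg")  # track over frames -> extension range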

Relevance:

20.00%

Publisher:

Abstract:

OBJECTIVE: The estimation of blood pressure is dependent on the accuracy of the measurement devices. We compared blood pressure readings obtained with an automated oscillometric arm-cuff device and with an automated oscillometric wrist-cuff device, and then assessed the prevalence of defined blood pressure categories. METHODS: Within a population-based survey in Dar es Salaam (Tanzania), we selected all participants with a blood pressure ≥160/95 mmHg (n=653) and a random sample of participants with blood pressure <160/95 mmHg (n=662), based on the first blood pressure reading. Blood pressure was reassessed 2 years later in 464 and 410 of these participants, respectively. In these 874 subjects, we compared the prevalence of blood pressure categories as estimated with each device. RESULTS: Overall, the wrist device gave higher blood pressure readings than the arm device (difference in systolic/diastolic blood pressure: 6.3 ± 17.3 / 3.7 ± 11.8 mmHg, P<0.001). However, the arm device tended to give lower readings than the wrist device for high blood pressure values. The prevalence of blood pressure categories differed substantially depending on which device was used (arm device versus wrist device, respectively): 29% and 14% for blood pressure <120/80 mmHg, 30% and 33% for 120-139/80-89 mmHg, 17% and 26% for 140-159/90-99 mmHg, 12% and 13% for 160-179/100-109 mmHg, and 13% and 14% for ≥180/110 mmHg. CONCLUSIONS: A large discrepancy in the estimated prevalence of blood pressure categories was observed between the two automatic measurement devices. This emphasizes that prevalence estimates based on automatic devices should be considered with caution.
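
A minimal sketch of the category assignment used in the comparison, applying the abstract's systolic/diastolic bands to paired readings from the two devices (the readings themselves are invented):

    def bp_category(sys, dia):
        """Assign the higher of the systolic- and diastolic-based categories,
        using the bands from the abstract (mmHg)."""
        bands = [(180, 110, ">=180/110"), (160, 100, "160-179/100-109"),
                 (140, 90, "140-159/90-99"), (120, 80, "120-139/80-89")]
        for s, d, label in bands:
            if sys >= s or dia >= d:
                return label
        return "<120/80"

    # Invented paired readings (arm device, wrist device) for a few subjects.
    pairs = [((118, 76), (126, 81)), ((152, 94), (160, 101)),
             ((176, 104), (181, 112))]
    for arm, wrist in pairs:
        print(bp_category(*arm), "vs", bp_category(*wrist))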