965 results for Web-Assisted Error Detection


Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: Develop recommendations for women's health issues and family planning in systemic lupus erythematosus (SLE) and/or antiphospholipid syndrome (APS). METHODS: Systematic review of evidence followed by modified Delphi method to compile questions, elicit expert opinions and reach consensus. RESULTS: Family planning should be discussed as early as possible after diagnosis. Most women can have successful pregnancies and measures can be taken to reduce the risks of adverse maternal or fetal outcomes. Risk stratification includes disease activity, autoantibody profile, previous vascular and pregnancy morbidity, hypertension and the use of drugs (emphasis on benefits from hydroxychloroquine and antiplatelets/anticoagulants). Hormonal contraception and menopause replacement therapy can be used in patients with stable/inactive disease and low risk of thrombosis. Fertility preservation with gonadotropin-releasing hormone analogues should be considered prior to the use of alkylating agents. Assisted reproduction techniques can be safely used in patients with stable/inactive disease; patients with positive antiphospholipid antibodies/APS should receive anticoagulation and/or low-dose aspirin. Assessment of disease activity, renal function and serological markers is important for diagnosing disease flares and monitoring for obstetrical adverse outcomes. Fetal monitoring includes Doppler ultrasonography and fetal biometry, particularly in the third trimester, to screen for placental insufficiency and small for gestational age fetuses. Screening for gynaecological malignancies is similar to the general population, with increased vigilance for cervical premalignant lesions if exposed to immunosuppressive drugs. Human papillomavirus immunisation can be used in women with stable/inactive disease. CONCLUSIONS: Recommendations for women's health issues in SLE and/or APS were developed using an evidence-based approach followed by expert consensus.

Relevance:

30.00%

Publisher:

Abstract:

Excess-weight problems have been on the rise over recent decades, particularly among young Quebecers. This increase is linked to eating habits that differ markedly from nutritional recommendations. In addition, the provincial government has introduced major changes to the Quebec school curriculum (Programme de formation de l'école québécoise) to encourage the adoption of healthy lifestyle habits. To counter these problems of excess weight and poor eating habits, and to build on the school reform, the web-based team version of the Nutriathlon (Nutriathlon en équipe version Web) was developed. The program aims to lead each participant to improve the quality of his or her diet by increasing and diversifying consumption of vegetables, fruit and dairy products. The objectives of the present study were (1) to assess the program's impact on high-school students' consumption of vegetables and fruit (VF) and dairy products (DP), and (2) to identify the factors influencing the program's success among these youths. The results showed that during the program, and immediately afterwards, the intervention group reported a significant increase in VF and DP consumption relative to the control group. However, no effect was observed in the medium term. As factors facilitating the success of the team Nutriathlon, students mentioned the use of technology to log portions, the formation of teams, the involvement of teachers and family, and the creation of strategies to help complete the program.
Students also mentioned barriers to the success of the team Nutriathlon, such as a lack of diligence in entering their data outside class hours, malfunctioning user codes, and the platform's incompatibility with certain devices such as tablets.

Relevance:

30.00%

Publisher:

Abstract:

AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: Visualization, Imaging and Image Processing (VIIP), 2008, Palma de Mallorca, Spain. Proceedings... Palma de Mallorca: VIIP, 2008.

Relevance:

30.00%

Publisher:

Abstract:

Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files; during this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session, and highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems, and it is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model captures the sequences and sub-sequences of user actions, their orderings, and the temporal relationships that make each user unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in the web logs generated in response to those actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis.
A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types, and it is also able to detect outliers in role-based user behavior with strong performance. In addition to web applications, this continuous monitoring technique can be applied to other user-based systems, such as mobile devices, and to the analysis of network traffic.
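The n-gram monitoring idea above can be sketched as follows. This is a minimal illustration, not the Intruder Detector implementation: the action names and the unseen-n-gram deviation score are assumptions introduced for the example.

```python
from collections import Counter

def ngrams(actions, n=2):
    """Extract overlapping n-grams from a sequence of user actions."""
    return [tuple(actions[i:i + n]) for i in range(len(actions) - n + 1)]

def build_profile(sessions, n=2):
    """Count n-gram frequencies across a user's historical sessions."""
    counts = Counter()
    for session in sessions:
        counts.update(ngrams(session, n))
    return counts

def deviation_score(session, profile, n=2):
    """Fraction of n-grams in a new session never seen in the profile.
    High scores suggest behaviour unlike the modelled user."""
    grams = ngrams(session, n)
    if not grams:
        return 0.0
    unseen = sum(1 for g in grams if g not in profile)
    return unseen / len(grams)

# Hypothetical web-log action sequences for one user.
history = [["login", "search", "view", "logout"],
           ["login", "view", "search", "view", "logout"]]
profile = build_profile(history)
print(deviation_score(["login", "search", "view", "logout"], profile))   # 0.0 (familiar)
print(deviation_score(["login", "export", "delete", "delete"], profile)) # 1.0 (anomalous)
```

In a deployed system the score would be compared against a threshold tuned per role or per user, which is the coarse-grain vs fine-grain distinction the abstract describes.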

Relevance:

30.00%

Publisher:

Abstract:

Incidental findings on low-dose CT images obtained during hybrid imaging are an increasing phenomenon as CT technology advances. Understanding the diagnostic value of incidental findings along with the technical limitations is important when reporting image results and recommending follow-up, which may result in an additional radiation dose from further diagnostic imaging and an increase in patient anxiety. This study assessed lesions incidentally detected on CT images acquired for attenuation correction on two SPECT/CT systems. Methods: An anthropomorphic chest phantom containing simulated lesions of varying size and density was imaged on an Infinia Hawkeye 4 and a Symbia T6 using the low-dose CT settings applied for attenuation correction acquisitions in myocardial perfusion imaging. Twenty-two interpreters assessed 46 images from each SPECT/CT system (15 normal images and 31 abnormal images; 41 lesions). Data were evaluated using a jackknife alternative free-response receiver-operating-characteristic analysis (JAFROC). Results: JAFROC analysis showed a significant difference (P < 0.0001) in lesion detection, with the figures of merit being 0.599 (95% confidence interval, 0.568, 0.631) and 0.810 (95% confidence interval, 0.781, 0.839) for the Infinia Hawkeye 4 and Symbia T6, respectively. Lesion detection on the Infinia Hawkeye 4 was generally limited to larger, higher-density lesions. The Symbia T6 allowed improved detection rates for midsized lesions and some lower-density lesions. However, interpreters struggled to detect small (5 mm) lesions on both image sets, irrespective of density. Conclusion: Lesion detection is more reliable on low-dose CT images from the Symbia T6 than from the Infinia Hawkeye 4. This phantom-based study gives an indication of potential lesion detection in the clinical context as shown by two commonly used SPECT/CT systems, which may assist the clinician in determining whether further diagnostic imaging is justified.

Relevance:

30.00%

Publisher:

Abstract:

SQL Injection Attack (SQLIA) remains a technique used by network intruders to pilfer an organisation's confidential data. An intruder re-crafts web-form inputs and the query strings used in web requests with malicious intent, compromising the security of confidential data stored in the back-end database. The database is the most valuable data source, so intruders are unrelenting in evolving new techniques to bypass the signature-based solutions currently provided by Web Application Firewalls (WAFs) to mitigate SQLIA. There is therefore a need for an automated, scalable methodology for pre-processing SQLIA features fit for a supervised learning model. However, obtaining a ready-made, scalable dataset whose items are feature-engineered into numerical attributes for training Artificial Neural Network (ANN) and Machine Learning (ML) models is a known obstacle to applying artificial intelligence effectively against ever-evolving novel SQLIA signatures. The proposed approach applies a numerical-attribute encoding ontology to encode features (both legitimate web requests and SQLIA) as numerical data items, so as to extract a scalable dataset to feed a supervised learning model, moving towards an ML SQLIA detection and prevention model. In the numerical encoding of features, the proposed model explores a hybrid of static and dynamic pattern matching by implementing a Non-Deterministic Finite Automaton (NFA), combined with a proxy and a SQL-parser Application Programming Interface (API) that intercept and parse web requests in transit to the back-end database. In developing a solution to SQLIA, this model allows web requests processed at the proxy and deemed to contain an injected query string to be blocked from reaching the target back-end database.
This paper evaluates the performance metrics of a dataset obtained by numerical encoding of the feature ontology in Microsoft Azure Machine Learning (MAML) Studio, using a Two-Class Support Vector Machine (TCSVM) binary classifier. This methodology then forms the subject of the empirical evaluation.
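As an illustration of what numerically encoding web-request features might look like, here is a toy sketch. The token ontology, the numeric codes and the fixed vector length are all hypothetical, introduced only for the example; they are not the paper's actual encoding ontology or NFA-based matcher.

```python
import re

# Hypothetical encoding ontology: map token classes to numeric codes.
TOKEN_CODES = {
    "SELECT": 1, "UNION": 2, "OR": 3, "AND": 4, "--": 5,
    "'": 6, ";": 7, "=": 8, "IDENT": 9, "NUMBER": 10,
}

def tokenize(request):
    """Split a raw query string into SQL-relevant tokens."""
    return re.findall(r"--|'|;|=|\w+", request)

def encode(request, length=12):
    """Encode a web request as a fixed-length numeric vector
    suitable as input to a supervised classifier (e.g. an SVM)."""
    vec = []
    for tok in tokenize(request):
        up = tok.upper()
        if up in TOKEN_CODES:
            vec.append(TOKEN_CODES[up])
        elif up.isdigit():
            vec.append(TOKEN_CODES["NUMBER"])
        else:
            vec.append(TOKEN_CODES["IDENT"])
    # Pad or truncate to the fixed length the model expects.
    return (vec + [0] * length)[:length]

print(encode("id=42"))                 # benign-looking request
print(encode("id=42' OR '1'='1' --"))  # classic tautology injection
```

The injected request produces a vector dense in quote, `OR` and comment codes, which is exactly the kind of separable pattern a binary classifier such as a TCSVM can learn.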

Relevance:

30.00%

Publisher:

Abstract:

We consider an LTE network where a secondary user acts as a relay, transmitting data to the primary user with a decode-and-forward mechanism that is transparent to the base station (eNodeB). Clearly, the relay can decode symbols more reliably if the employed precoder matrix indicators (PMIs) are known. However, for the closed-loop spatial multiplexing (CLSM) transmit mode, this information is not always embedded in the downlink signal, so effective methods are needed to determine the PMI. In this thesis, we consider 2x2 MIMO and 4x4 MIMO downlink channels corresponding to CLSM and formulate two techniques for estimating the PMI at the relay within a hypothesis-testing framework. We evaluate their performance via simulations for various ITU channel models over a range of SNRs and for different channel quality indicators (CQIs). We compare them to the case where the true PMI is known at the relay and show that the performance of the proposed schemes is within 2 dB at 10% block error rate (BLER) in almost all scenarios. Furthermore, the techniques add minimal computational overhead to the existing receiver structure. Finally, we identify scenarios in which using the proposed precoder detection algorithms in conjunction with the cooperative decode-and-forward relaying mechanism benefits the primary user equipment (PUE) and improves its BLER performance. We therefore conclude that the proposed algorithms, as well as the cooperative relaying mechanism at the CMR, can be gainfully employed in a variety of real-life scenarios in LTE networks.
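The hypothesis-testing idea can be sketched as a simple residual test: for each candidate precoder, compare the received pilots against what that precoder would have produced, and pick the best match. This is a toy illustration under assumed names; the placeholder codebook below is not the standardised LTE codebook, and the signal model is simplified.

```python
import numpy as np

# Hypothetical 2x2 codebook of candidate precoder matrices (one per PMI);
# real LTE codebook entries are standardised, these are placeholders.
CODEBOOK = [
    np.array([[1, 0], [0, 1]]) / np.sqrt(2),
    np.array([[1, 1], [1, -1]]) / 2,
    np.array([[1, 1], [1j, -1j]]) / 2,
]

def detect_pmi(Y, H, X):
    """Hypothesis test: choose the PMI whose precoder best explains
    the received pilots Y, given channel estimate H and known symbols X."""
    residuals = [np.linalg.norm(Y - H @ W @ X) for W in CODEBOOK]
    return int(np.argmin(residuals))

# Simulate one noisy observation with a known true PMI.
rng = np.random.default_rng(0)
H = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
X = rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8))
true_pmi = 2
noise = 0.01 * (rng.standard_normal((2, 8)) + 1j * rng.standard_normal((2, 8)))
Y = H @ CODEBOOK[true_pmi] @ X + noise
print(detect_pmi(Y, H, X))  # recovers index 2 at this noise level
```

At higher noise levels the residuals overlap, which is why the thesis reports a gap (within 2 dB at 10% BLER) relative to perfect PMI knowledge.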

Relevance:

30.00%

Publisher:

Abstract:

Plants frequently suffer contamination by toxigenic fungi, whose mycotoxins can be produced throughout the growth, harvest, drying and storage periods. The objective of this work was to validate a fast, highly sensitive method for the detection of toxins in medicinal and aromatic plants, optimizing the joint co-extraction of aflatoxins (AF: AFB1, AFB2, AFG1 and AFG2) and ochratoxin A (OTA), using Aloysia citrodora P. (lemon verbena) as a case study. For optimization purposes, samples were spiked (n=3) with standard solutions of a mix of the four AFs and OTA at 10 ng/g for AFB1, AFG1 and OTA, and at 6 ng/g for AFB2 and AFG2. Several extraction procedures were tested: i) ultrasound-assisted extraction in sodium chloride and methanol/water (80:20, v/v) [(OTA+AFs)1]; ii) maceration in methanol/1% NaHCO3 (70:30, v/v) [(OTA+AFs)2]; iii) maceration in methanol/1% NaHCO3 (70:30, v/v) (OTA1); and iv) maceration in sodium chloride and methanol/water (80:20, v/v) (AF1). AF and OTA were purified using the mycotoxin-specific immunoaffinity columns AflaTest WB and OchraTest WB (VICAM), respectively. Separation was performed on a Merck Chromolith Performance C18 column (100 x 4.6 mm) by reverse-phase HPLC coupled to a fluorescence detector (FLD) and a photochemical derivatization system (for AF). The recoveries obtained from the spiked samples showed that the single-extraction methods (OTA1 and AF1) performed better than the co-extraction methods. For in-house validation of the selected methods OTA1 and AF1, recovery and precision were determined (n=6). The recovery of OTA with method OTA1 was 81%, with an intermediate precision (RSDint) of 1.1%. The recoveries of AFB1, AFB2, AFG1 and AFG2 ranged from 64% to 110% for method AF1, with RSDint lower than 5%. Methods OTA1 and AF1 showed precision and recoveries within the legislated values and were found to be suitable for the extraction of OTA and AF from the matrix under study.

Relevance:

30.00%

Publisher:

Abstract:

This article is concerned with the numerical detection of bifurcation points of nonlinear partial differential equations as some parameter of interest is varied. In particular, we study in detail the numerical approximation of the Bratu problem, based on exploiting the symmetric version of the interior penalty discontinuous Galerkin finite element method. A framework for a posteriori control of the discretization error in the computed critical parameter value is developed based upon the application of the dual weighted residual (DWR) approach. Numerical experiments are presented to highlight the practical performance of the proposed a posteriori error estimator.
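For reference, the Bratu problem studied above is the standard nonlinear elliptic benchmark; a sketch of its usual form (the domain and homogeneous Dirichlet boundary condition are assumptions, as the abstract does not state them explicitly):

```latex
-\Delta u = \lambda\, e^{u} \quad \text{in } \Omega,
\qquad
u = 0 \quad \text{on } \partial\Omega .
```

Solutions exist only up to a critical value of the parameter \(\lambda\), at which the solution branch folds back; this turning point is the bifurcation point whose computed value the dual weighted residual estimator controls.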

Relevance:

30.00%

Publisher:

Abstract:

A better method for determination of shikimate in plant tissues is needed to monitor exposure of plants to the herbicide glyphosate [N-(phosphonomethyl)glycine] and to screen the plant kingdom for high levels of this valuable phytochemical precursor to the pharmaceutical oseltamivir. A simple, rapid, and efficient method using microwave-assisted extraction (MWAE) with water as the extraction solvent was developed for the determination of shikimic acid in plant tissues. High performance liquid chromatography was used for the separation of shikimic acid, and chromatographic data were acquired using photodiode array detection. This MWAE technique was successful in recovering shikimic acid from a series of fortified plant tissues at more than 90% efficiency with an interference-free chromatogram. This allowed the use of lower amounts of reagents and organic solvents, reducing the use of toxic and/or hazardous chemicals, as compared to currently used methodologies. The method was used to determine the level of endogenous shikimic acid in several species of Brachiaria and sugarcane (Saccharum officinarum) and on B. decumbens and soybean (Glycine max) after treatment with glyphosate. The method was sensitive, rapid and reliable in all cases.

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes work carried out at a local IT company to research and implement an algorithm for detecting and blurring the faces appearing in industrial e-learning videos, in order to protect the privacy of the workers shown. The algorithm was then to be packaged as a software module for integration into an existing web application used to manage these videos. An ad hoc solution was sought, taking into account the particular characteristics of the problem and surveying the main Computer Vision techniques to better understand which approach to pursue. It was therefore decided to implement a colour-based Blob Tracking algorithm.
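The colour-based blob detection step underlying such a tracker can be sketched in pure Python. This is a minimal illustration only (real implementations would typically use a library such as OpenCV); the target colour, tolerance and 4-connectivity are assumptions for the example.

```python
def color_mask(image, target, tol=30):
    """Boolean mask of pixels within `tol` of the target RGB colour,
    per channel. `image` is a list of rows of (r, g, b) tuples."""
    return [[all(abs(c - t) <= tol for c, t in zip(px, target)) for px in row]
            for row in image]

def blobs(mask):
    """Label 4-connected components in the mask and return one bounding
    box (min_row, min_col, max_row, max_col) per detected blob."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                stack, box = [(r, c)], [r, c, r, c]
                seen[r][c] = True
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    box = [min(box[0], y), min(box[1], x),
                           max(box[2], y), max(box[3], x)]
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append(tuple(box))
    return boxes

# Tiny synthetic frame: a 2x2 skin-coloured patch on a black background.
skin, bg = (200, 150, 120), (0, 0, 0)
image = [[bg] * 4 for _ in range(4)]
for r in (1, 2):
    for c in (1, 2):
        image[r][c] = skin
print(blobs(color_mask(image, skin)))  # one blob spanning rows/cols 1-2
```

Tracking then amounts to matching each frame's boxes to the previous frame's (e.g. by nearest centroid) and blurring the matched regions.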

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a web-based expert system application that carries out an initial assessment of the feasibility of a web project. The system detects inconsistency problems before design starts and suggests corrective actions to solve them. The developed system presents important advantages, not only in determining the feasibility of a web project but also in acting as a means of communication between the client company and the web development team, making the requirements specification clearer.

Relevance:

30.00%

Publisher:

Abstract:

Acoustic Emission (AE) monitoring can be used to detect the presence of damage, as well as to determine its location, in Structural Health Monitoring (SHM) applications. Information on the differences in the times at which the signal generated by a damage event arrives at different sensors is essential for localization, which makes the time of arrival (ToA) an important piece of information to retrieve from the AE signal. Generally, the ToA is determined using statistical methods such as the Akaike Information Criterion (AIC), which is particularly prone to errors in the presence of noise. Given that the structures of interest are often surrounded by harsh environments, a way to accurately estimate the arrival time in such noisy scenarios is of particular interest. In this work, two new methods based on machine learning are presented for estimating the arrival times of AE signals. Inspired by strong results in the field, two Deep Learning models (a subset of machine learning) are presented, based on the Convolutional Neural Network (CNN) and the Capsule Neural Network (CapsNet). The primary advantage of such models is that they do not require the user to pre-define selected features: they take raw data and establish non-linear relationships between inputs and outputs. The performance of the models is evaluated on AE signals generated by a custom ray-tracing algorithm simulating propagation on an aluminium plate, and compared to AIC. The relative estimation error on the test set was below 5% for the models, compared to around 45% for AIC. Testing then continued with an experimental setup acquiring real AE signals. Similar performance was observed: the two models not only outperform AIC by more than an order of magnitude in average error, but are also far more robust than AIC, which fails in the presence of noise.
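The AIC picker that the proposed models are compared against can be sketched as follows. This is a common formulation (e.g. Maeda's AIC picker); the synthetic trace below is illustrative, not the paper's data.

```python
import numpy as np

def aic_picker(x):
    """AIC onset picker: the arrival time is the sample index minimising
    AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]))."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    aic = np.full(n, np.inf)
    for k in range(1, n - 1):
        v1, v2 = np.var(x[:k]), np.var(x[k:])
        if v1 > 0 and v2 > 0:
            aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
    return int(np.argmin(aic))

# Synthetic AE-like trace: low-level noise, then a decaying burst at sample 200.
rng = np.random.default_rng(1)
signal = 0.05 * rng.standard_normal(500)
signal[200:] += np.sin(0.3 * np.arange(300)) * np.exp(-np.arange(300) / 80.0)
print(aic_picker(signal))  # close to the true onset at sample 200
```

At this signal-to-noise ratio the variance split is sharp and the picker lands near the true onset; as noise grows the two variance estimates blur together, which is the failure mode the learned models are designed to overcome.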

Relevance:

30.00%

Publisher:

Abstract:

This thesis provides a corpus-assisted pragmatic investigation of three Japanese expressions commonly signalled as apologetic, namely gomen, su(m)imasen and mōshiwake arimasen, which can be roughly translated in English with ‘(I’m) sorry’. The analysis is based on a web corpus of 306,670 tokens collected from the Q&A website Yahoo! Chiebukuro, which is examined combining quantitative (statistical) and qualitative (traditional close reading) methods. By adopting a form-to-function approach, the aim of the study is to shed light on three main topics of interest: the pragmatic functions of apology-like expressions, the discursive strategies they co-occur with, and the behaviours that warrant them. The overall findings reveal that apology-like expressions are multifunctional devices whose meanings extend well beyond ‘apology’ alone. These meanings are affected by a number of discursive strategies that can either increase or decrease the perceived (im)politeness level of the speech act to serve interactants’ face needs and communicative goals. The study also identifies a variety of behaviours that people frame as violations, not necessarily because they are actually face-threatening to the receiver, but because doing so is functional to the projection of the apologiser as a moral persona. An additional finding that emerged from the analysis is the pervasiveness of reflexive usages of apology-like expressions, which are often employed metadiscursively to convey, negotiate and challenge opinions on how language should be used. To conclude, the study provides a unique insight into the use of three expressions whose pragmatic meanings are more varied than anticipated. The findings reflect the use of (im)politeness in an online and non-Western context and, hopefully, represent a step towards a more inclusive notion of ‘apologies’ and related speech acts.