971 results for online damage detection


Relevance:

30.00%

Publisher:

Abstract:

Phospholipid oxidation by adventitious damage generates a wide variety of products with potentially novel biological activities that can modulate the inflammatory processes associated with various diseases. Understanding the biological importance of oxidised phospholipids (OxPLs) and their potential role as disease biomarkers requires precise information about the abundance of these compounds in cells and tissues. Many chemiluminescence and spectrophotometric assays are available for detecting oxidised phospholipids, but they all have limitations. Mass spectrometry coupled with liquid chromatography is a powerful and sensitive approach that can provide detailed information about the oxidative lipidome, but challenges remain. The aim of this work was to develop improved methods for detecting OxPLs by optimising the chromatographic separation, testing several reversed-phase columns and solvent systems, and using targeted mass spectrometry approaches. Initial experiments were carried out using oxidation products generated in vitro to optimise the chromatographic separation and mass spectrometry parameters. We evaluated the chromatographic separation of oxidised phosphatidylcholines (OxPCs) and oxidised phosphatidylethanolamines (OxPEs) using C8, C18 and C30 reversed-phase columns, a polystyrene-divinylbenzene monolithic column, and a mixed-mode hydrophilic interaction (HILIC) column, each interfaced with mass spectrometry. Our results suggest that the monolithic column was best able to separate short-chain OxPCs and OxPEs from long-chain oxidised and native PCs and PEs. However, variation in the charge of polar head groups and the extreme diversity of oxidised species make analysis of several classes of OxPLs within one analytical run impractical. We therefore evaluated and optimised the chromatographic separation of OxPLs by serially coupling two columns, HILIC and monolithic, which provided larger coverage of OxPL species in a single analytical run.
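
As a minimal illustration of the targeted mass spectrometry side of such a method, the sketch below computes monoisotopic [M+H]+ values for an inclusion list from elemental formulas. This is a sketch only; the short-chain OxPC composition used in the example is a placeholder assumption, not a species from this work.

```python
# Sketch: monoisotopic [M+H]+ values for a targeted OxPL inclusion list.
# Monoisotopic atomic masses (standard IUPAC values).
MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
        "O": 15.9949146221, "P": 30.97376151}
PROTON = 1.00727646  # proton mass, added to form the [M+H]+ ion

def monoisotopic_mass(formula):
    """Sum atomic masses for an elemental composition, e.g. {"H": 2, "O": 1}."""
    return sum(MASS[el] * n for el, n in formula.items())

def mz_protonated(formula):
    """m/z of the singly protonated ion [M+H]+."""
    return monoisotopic_mass(formula) + PROTON

# Sanity check: water should come out near 18.0106 Da.
print(f"H2O: {monoisotopic_mass({'H': 2, 'O': 1}):.4f} Da")
# Hypothetical short-chain OxPC composition (placeholder, not from this study).
print(f"[M+H]+: {mz_protonated({'C': 29, 'H': 56, 'N': 1, 'O': 9, 'P': 1}):.4f}")
```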

Relevance:

30.00%

Publisher:

Abstract:

With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. Attacks fall into four main categories: Denial of Service (DoS), Probe, User to Root (U2R), and Remote to Local (R2L). Within these categories, DoS and Probe attacks appear with high frequency over a short period of time when they strike a system; they differ clearly from normal traffic and can easily be separated from normal activities. In contrast, U2R and R2L attacks are embedded in the data portions of packets and normally involve only a single connection, which makes it difficult to achieve satisfactory detection accuracy for these two attack types. We therefore focus on the ambiguity problem between normal activities and U2R/R2L attacks, with the goal of building a detection system that can accurately and quickly detect both. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to increase the speed of detection: features with poor ability to predict attack signatures, and features inter-correlated with one or more other features, are considered redundant and removed, leaving only the indispensable information about the original feature space. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method combines multiple feature-selecting intrusion detectors with a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to resolve the ambiguity problem; the latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision combines the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces detection time but also effectively detects U2R and R2L attacks containing degrees of ambiguous information.
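
The dissertation's exact feature selection algorithm is not given here, but a minimal sketch of a correlation-based redundancy filter in the spirit of the first phase might look like the following (the thresholds and the greedy ordering are illustrative assumptions):

```python
import numpy as np

def select_features(X, y, min_relevance=0.1, max_redundancy=0.9):
    """Keep features that correlate with the label and are not strongly
    inter-correlated with an already-kept feature. Thresholds are
    illustrative, not taken from the dissertation."""
    n_features = X.shape[1]
    # Absolute Pearson correlation of each feature with the class label.
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    kept = []
    # Consider features in order of decreasing relevance to the label.
    for j in np.argsort(-relevance):
        if relevance[j] < min_relevance:
            break  # remaining features predict attack signatures too poorly
        # Drop j if it is highly correlated with any feature already kept.
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_redundancy
               for k in kept):
            kept.append(int(j))
    return kept
```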

Relevance:

30.00%

Publisher:

Abstract:

Fast-spreading unknown viruses have caused major damage to computer systems upon their initial release, and current detection methods have lacked the capability to detect unknown viruses quickly enough to avoid mass spreading and damage. This dissertation presented a behavior-based approach to detecting known and unknown viruses based on their attempts to replicate. Replication is the qualifying, fundamental characteristic of a virus and is consistently present in all viruses, making the approach applicable to viruses belonging to many classes and executing under several conditions. A form of replication called self-reference replication (SR-replication) was formalized as one main type of replication, in which a virus replicates by modifying or creating other files on a system to include the virus itself. This replication type was used to detect viruses attempting replication by referencing themselves, a necessary step in successfully replicating to other files. The approach does not require a priori knowledge of known viruses. Detection was accomplished at runtime by monitoring currently executing processes for attempts to replicate. Two implementation prototypes of the detection approach, called SRRAT, were created and tested on Microsoft Windows operating systems, focusing on the tracking of user-mode Win32 API calls and kernel-mode system services. The results showed SR-replication capable of distinguishing between file-infecting viruses and benign processes with little or no false positives and false negatives.
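
The core self-reference test can be illustrated with a short sketch: does a file written by a process embed the bytes of the process's own executable image? A real SRRAT-style detector hooks user-mode API calls or kernel-mode system services at runtime; this offline comparison, with an arbitrary probe length, is an assumption-laden simplification.

```python
from pathlib import Path

def embeds_self(process_image: Path, written_file: Path,
                probe_len: int = 4096) -> bool:
    """Heuristic self-reference check: flag the write if the target file
    contains a prefix of the writing process's own executable. The probe
    length is an illustrative choice, not a value from the dissertation."""
    probe = process_image.read_bytes()[:probe_len]
    return probe in written_file.read_bytes()

# Example: a process whose image embeds its own leading bytes into another
# executable file it writes would be flagged as attempting SR-replication.
```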

Relevance:

30.00%

Publisher:

Abstract:

Background: Sucralose has gained popularity as a low-calorie artificial sweetener worldwide. Due to its high stability and persistence, sucralose occurs widely in environmental waters, at concentrations that can reach several μg/L. Previous studies have used time-consuming sample preparation methods (offline solid phase extraction/derivatization) or methods with rather high detection limits (direct injection) for sucralose analysis. This study describes a faster, sensitive analytical method for the determination of sucralose in environmental samples.

Results: An online SPE-LC-MS/MS method was developed, capable of quantifying sucralose in 12 minutes using only 10 mL of sample, with method detection limits (MDLs) of 4.5 ng/L, 8.5 ng/L and 45 ng/L for deionized water, drinking water and reclaimed water (diluted 1:10 with deionized water), respectively. Sucralose was detected in 82% of the reclaimed water samples at concentrations reaching up to 18 μg/L; the monthly average over a period of one year was 9.1 ± 2.9 μg/L. Based on the concentrations detected in U.S. wastewaters, the calculated per-capita mass load of sucralose discharged through WWTP effluents is 5.0 mg/day/person. As expected, the concentrations observed in drinking water were much lower but still relevant, reaching as high as 465 ng/L. To evaluate the stability of sucralose, photodegradation experiments were performed in natural waters. Significant photodegradation was observed only in freshwater at 254 nm; minimal degradation (<20%) was observed for all matrices under more natural conditions (350 nm or a solar simulator). The only photolysis product of sucralose identified by high-resolution mass spectrometry was a de-chlorinated molecule at m/z 362.0535, with molecular formula C12H20Cl2O8.

Conclusions: The online SPE-LC-APCI-MS/MS method developed in this study was applied to more than 100 environmental samples. Sucralose was frequently detected (>80%), indicating that the conventional treatment processes employed in sewage treatment plants do not remove it efficiently. Its detection in drinking waters suggests potential contamination of surface and ground water sources by anthropogenic wastewater streams. Its high resistance to photodegradation, minimal sorption and high solubility indicate that sucralose could be a good tracer of anthropogenic wastewater intrusion into the environment.
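
As a back-of-the-envelope check, the per-capita load quoted above follows from the average effluent concentration together with an assumed typical per-capita wastewater flow (the flow value below is an assumption, not a number from the study):

```python
avg_conc_ug_per_L = 9.1        # average effluent concentration (from the abstract)
flow_L_per_person_day = 550.0  # assumed typical per-capita wastewater flow

load_mg = avg_conc_ug_per_L * flow_L_per_person_day / 1000.0  # convert ug to mg
print(f"{load_mg:.1f} mg/day/person")  # ~5.0 mg/day/person, matching the abstract
```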

Relevance:

30.00%

Publisher:

Abstract:

The study was funded by the Association of Anaesthetists of Great Britain and Ireland, the British Journal of Anaesthesia/Royal College of Anaesthetists (PhD studentship award to BM) and the Melville Trust.

Relevance:

30.00%

Publisher:

Abstract:

Conventional reliability models for parallel systems are not applicable to parallel systems with load transfer and sharing. In this short communication, the dependent failures of parallel systems are first analyzed, and a reliability model for load-sharing parallel systems is derived from Miner's cumulative damage theory and the law of total probability. The system reliability is then computed by Monte Carlo simulation for components whose lives follow the Weibull distribution. The results show that the proposed reliability model can analyze and evaluate the reliability of parallel systems in the presence of load transfer.
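
A minimal Monte Carlo sketch of the idea for a two-unit load-sharing system: component lives are Weibull, and after the first failure the survivor consumes its remaining life faster, a simple linear damage-accumulation rule in the spirit of Miner's theory. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def system_lives(shape, scale, accel, n_sim=100_000):
    """Simulated lives of a 2-unit load-sharing parallel system.
    'accel' models load transfer: after the first failure, the survivor's
    remaining life is consumed 'accel' times faster (linear damage rule)."""
    lives = scale * rng.weibull(shape, size=(n_sim, 2))
    first = lives.min(axis=1)   # time of first component failure
    second = lives.max(axis=1)  # survivor's nominal (shared-load) life
    return first + (second - first) / accel

t = system_lives(shape=2.0, scale=1000.0, accel=2.0)
mission = 800.0
print(f"R({mission:g}) = {(t > mission).mean():.3f}")  # estimated reliability
```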

Relevance:

30.00%

Publisher:

Abstract:

With applications ranging from aerospace to biomedicine, additive manufacturing (AM) is revolutionizing the manufacturing industry. The ability of additive techniques, such as selective laser melting (SLM), to create fully functional, geometrically complex, and unique parts out of high-strength materials is of great interest. Unfortunately, despite the numerous advantages afforded by this technology, its widespread adoption is hindered by a lack of on-line, real-time feedback control and quality assurance techniques. In this thesis, inline coherent imaging (ICI), a broadband, spatially coherent imaging technique, is used to observe the SLM process in 15-45 µm 316L stainless steel powder. Imaging of both single- and multi-layer builds is performed at a rate of 200 kHz, with a resolution of tens of microns and a dynamic range high enough to render it impervious to blinding by the process beam. This allows imaging before, during, and after laser processing to observe changes in the morphology and stability of the melt. Galvanometer-based scanning of the imaging beam relative to the process beam during the creation of single tracks is used to gain a perspective on the SLM process that has so far been unobservable by other monitoring techniques. Single-track processing is also used to investigate a preliminary feedback control parameter based on the process beam power, through imaging both coaxially and with a 100 µm offset with respect to the process beam. The 100 µm offset improved imaging, increasing the number of bright A-lines (i.e. those with signal greater than the 10 dB noise floor) by 300%. The overlap between adjacent tracks in a single layer is imaged to detect characteristic fault signatures. Full multi-layer builds are carried out, and the resulting ICI images are used to detect defects in the finished part and to improve the initial design of the build system. Damage to the recoater blade is assessed using powder-layer scans acquired during a 3D build. The ability of ICI to monitor SLM processes at such high rates and resolution offers extraordinary potential for future advances in on-line feedback control of additive manufacturing.
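
The "bright A-line" figure of merit used above to compare coaxial and offset alignment can be illustrated with a small sketch; the array layout and names are assumptions:

```python
import numpy as np

def bright_fraction(alines_db, noise_floor_db=10.0):
    """Fraction of A-lines (rows of an n_alines x n_depths array, in dB)
    whose peak signal exceeds the noise floor, as in the 10 dB criterion
    quoted above. The input layout is an assumption."""
    peaks = np.asarray(alines_db).max(axis=1)
    return float((peaks > noise_floor_db).mean())

# Comparing coaxial vs. offset alignment then reduces to comparing
# bright_fraction(coaxial_scan) against bright_fraction(offset_scan).
```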

Relevance:

30.00%

Publisher:

Abstract:

Automatic detection systems do not perform as well as human observers, even on simple detection tasks. A potential solution to this problem is training vision systems on appropriate regions of interest (ROIs), rather than on predefined and arbitrarily selected regions. Here we focus on detecting pedestrians in static scenes. Our aim is to answer the following question: can automatic vision systems for pedestrian detection be improved by training them on perceptually-defined ROIs?
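
A minimal sketch of the contrast the question draws, between training crops taken from perceptually-defined ROIs and predefined, arbitrarily placed windows (the (x, y, w, h) ROI format is an assumption):

```python
import numpy as np

def crop_rois(image, rois):
    """Training patches from perceptually-defined (x, y, w, h) regions."""
    return [image[y:y + h, x:x + w] for (x, y, w, h) in rois]

def crop_grid(image, win=64, stride=64):
    """Baseline: predefined, arbitrarily placed windows tiled over the image."""
    H, W = image.shape[:2]
    return [image[r:r + win, c:c + win]
            for r in range(0, H - win + 1, stride)
            for c in range(0, W - win + 1, stride)]
```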

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a study undertaken to examine human interaction with a pedagogical agent, and the passive and active detection of such agents, within a synchronous online environment. A pedagogical agent is a software application that provides human-like interaction using a natural language interface. These may be familiar from smartphone assistants such as 'Siri' or 'Cortana', or from the virtual online assistants found on some websites, such as 'Anna' on the Ikea website. Pedagogical agents are characters on the computer screen with embodied life-like behaviours such as speech, emotions, locomotion, gestures, and movements of the head, eyes, or other parts of the body. In the passive detection test, participants are not primed to the potential presence of a pedagogical agent within the online environment; in the active detection test, they are. The purpose of the study was to examine how people passively detected pedagogical agents that were presenting themselves as humans in an online environment. To locate the pedagogical agent in a realistic higher-education online environment, problem-based learning online was used, as it provides a focus for discussion and participation without creating too much artificiality. The findings indicated that the ways in which students positioned the agent tended to influence the interaction between them. One key finding was that, since the agent focussed mainly on the pedagogical task, interaction with the students may have been hampered; however, some of its non-task dialogue did improve students' perceptions of the agent's ability to interact with them. It is suggested that future studies explore the differences between the relationships and interactions of learner and pedagogical agent within authentic situations, in order to understand whether students' interactions differ between real and virtual mentors in an online setting.

Relevance:

30.00%

Publisher:

Abstract:

Recent trends in embedded system development open new prospects for applications that were not possible in the past. Eye tracking for sleep and fatigue detection has become an important and useful application in industrial and automotive scenarios, since fatigue is one of the most prevalent causes of earth-moving equipment accidents. Typical solutions on the market, such as cameras, accelerometers and dermal analyzers, each have drawbacks. This thesis project used the EEG signal, in particular alpha waves, to overcome them, through an embedded software/hardware implementation that detects these signals in real time.
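
A minimal sketch of the signal-processing core implied here: estimating power in the alpha band (8-12 Hz) of an EEG channel and flagging fatigue when it rises. The sampling rate and threshold are assumptions and would be calibrated per user on the real embedded hardware.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs=256.0):
    """Mean power spectral density in the 8-12 Hz alpha band (Welch)."""
    f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 1024))
    band = (f >= 8.0) & (f <= 12.0)
    return float(psd[band].mean())

def is_drowsy(eeg, fs=256.0, threshold=5.0):
    """Illustrative detector: elevated alpha power accompanies eye closure
    and drowsiness. The threshold here is an assumed, per-user value."""
    return alpha_power(eeg, fs) > threshold
```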