993 results for Fall-detection, elderly falls, Android, ADL, Accelerometer, Impact, Vertical velocity, Posture
Abstract:
Graduate program in Computer Science - IBILCE
Abstract:
The term Smart Grid refers to a pervasive urban network that carries energy, information and control, made up of highly distributed and cooperating devices and systems. It must be able to intelligently orchestrate the actions of all connected users and devices in order to distribute energy safely, efficiently and sustainably. This combination of ICT and energy is also commonly identified by the terms Smart Metering or Internet of Energy. The growing demand for energy and the absolute need to reduce environmental impacts (the 20-20-20 climate and energy package [9]) have created a convergence of scientific, industrial and political interest in how ICT technologies can enable a structural transformation of every stage of the energy cycle: from generation to storage, transport, distribution, retail and, not least, the intelligent consumption of energy. All connected devices will become an active part of a control loop that extends from the large generation plants to the behaviour of individual users, household appliances, electric cars and distributed micro-generation systems. The Smart Grid will therefore have to rely on a pervasive communication network that provides not only connectivity between devices but also the enabling of new value-added energy services. In this scenario, the communication strategy developed for electricity Smart Metering can be extended to all remote-sensing and management applications, such as new smart water and gas meters, waste management, air pollution monitoring, road noise monitoring, continuous control of the public lighting system, city parking management systems, monitoring of bicycle-rental services, and so on. All of this is expected to contribute to the design of a single connected system in which different heterogeneous devices are linked together to provide an adequate low-cost, low-power infrastructure, called a Metropolitan Mesh Machine Network (M3N) or, better still, a Smart City. Smart Cities will in turn have to become active networks, able to react to external events and to pursue efficiency objectives autonomously and in real time. They too require the introduction of smart meters, connected to a broadband communication network and able to manage a bidirectional monitoring and control flow extended to all devices connected to the electricity grid (but also to the gas and water networks, etc.). The M3N is an extension of wireless mesh networks (WMNs), a keenly anticipated technology that will play a very important role in the next generation of wireless networks. A WMN is a telecommunication network based on radio nodes in which there are at least two paths connecting any two nodes. It is a robust type of network that offers redundancy: when one node goes down, all the remaining nodes can still communicate with each other, either directly or through one or more intermediate nodes. WMNs represent a fundamental network topology in the ongoing evolution of radio networks, marking a departure from traditional wireless networks based on a centralized system, such as cellular networks and WLANs (Wireless Local Area Networks).
Much as happened with fixed telecommunication networks, which between the late 1960s and the early 1970s began to adopt distributed network schemes that evolved and gradually gained ground, eventually becoming the Internet, M3Ns promise to be the future of "smart" wireless networks. The first advantage of a WMN is its tolerance to the failure of network nodes. Unlike a cellular network, where the failure of a Base Station means loss of service over a large geographical area, WMNs are highly tolerant of failures, even when more than one node goes down. The goal of this thesis is to evaluate the performance, in terms of connectivity and throughput, of an M3N as a function of several parameters, such as the network architecture, the usable technologies (and therefore the transmit power, frequency, Building Penetration Loss, etc.) and different connectivity conditions (i.e. different propagation scenarios and housing densities). Using Matlab, a simulator was therefore designed and developed that reproduces the characteristics of a generic M3N and serves as a tool for evaluating its performance. The work was carried out at the DEIS laboratories at Villa Grifone in collaboration with FUB (Fondazione Ugo Bordoni).
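To make the kind of evaluation described above concrete, the following is a minimal Python sketch of a connectivity check over randomly placed mesh nodes, using a simple distance threshold in place of the full link-budget and propagation models used in the thesis; all function names and numeric values are illustrative, not taken from the simulator.

```python
import numpy as np

def connected_fraction(n_nodes, area_side_m, max_range_m, seed=0):
    """Drop n_nodes uniformly in a square area and report the fraction of
    nodes reachable from node 0 when two nodes can talk iff their distance
    is below max_range_m (a crude stand-in for a full link-budget check)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, area_side_m, size=(n_nodes, 2))
    # Pairwise distances and adjacency under the range threshold.
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = (d < max_range_m) & ~np.eye(n_nodes, dtype=bool)
    # Breadth-first search from node 0 to find its connected component.
    reached = np.zeros(n_nodes, dtype=bool)
    reached[0] = True
    frontier = [0]
    while frontier:
        nxt = []
        for i in frontier:
            for j in np.nonzero(adj[i] & ~reached)[0]:
                reached[j] = True
                nxt.append(int(j))
        frontier = nxt
    return reached.mean()

if __name__ == "__main__":
    # Sweep node density to see how connectivity changes, as the thesis does
    # (with far more detailed propagation models) for the M3N.
    for n in (20, 50, 100, 200):
        print(n, round(connected_fraction(n, area_side_m=1000, max_range_m=150), 2))
```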
Abstract:
Marking the final explosive burning stage of massive stars, supernovae are one of the most energetic celestial events. Apart from their enormous optical brightness they are also known to be associated with strong emission of MeV neutrinos—up to now the only proven source of extrasolar neutrinos. Although designed for the detection of high-energy neutrinos, the recently completed IceCube neutrino telescope in the Antarctic ice will have the highest sensitivity of all current experiments to measure the shape of the neutrino light curve, which is in the MeV range. This measurement is crucial for the understanding of supernova dynamics. In this thesis, the development of a Monte Carlo simulation for a future low-energy extension of IceCube, called PINGU, is described that investigates the response of PINGU to a supernova. Using this simulation, various detector configurations are analysed and optimised for supernova detection. The prospects of extracting not only the total light curve, but also the direction of the supernova and the mean neutrino energy from the data are discussed. Finally, the performance of PINGU is compared to the current capabilities of IceCube.
Abstract:
The risk of falling during walking increases with age; falls entail high healthcare costs and have a critical impact on the health of older adults. The assessment of gait stability is a fundamental aspect in the definition of a fall-risk index. Several measures of gait variability and stability have been proposed in the literature with the aim of achieving methodological standardization and results that are reliable, repeatable and easy to interpret. These measures would complement, or replace, clinical scales, the only method currently in use, whose reliability is limited because it depends heavily on the operator. The ultimate aim is to define a subject-specific fall-risk index. The objectives of this thesis include assessing the repeatability, as a function of trial length, of a set of variability and stability measures applied to the gait of elderly subjects; assessing the correlation between these variability and stability measures and the outcomes of the clinical scales; and assessing the correlation between these measures and the subjects' fall history. To this end, questionnaires and functional assessment scales (CIRS, Barthel, Mini-BESTest) were administered to 70 subjects over 65 years of age. The subjects also walked along a straight path about 100 m long; the corresponding acceleration signal was recorded with inertial sensors. The general result is that some stability measures, such as MSE and RQA, which had already been linked to fall history in previous work, show promising performance for the definition of fall risk.
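As an illustration of how such stability measures are computed from an acceleration signal, here is a hedged Python sketch of sample entropy and its multiscale (MSE) variant; the parameter choices (m = 2, r = 0.2·SD) are common defaults, not necessarily those used in the thesis.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal: -ln(A/B), where B counts pairs of
    m-length templates within tolerance r and A the same for length m+1."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()

    def count_matches(dim):
        templ = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        # Chebyshev distance between every pair of templates (memory-hungry
        # for long signals; fine for a sketch).
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        return np.sum(d < r) - len(templ)  # drop the diagonal self-matches

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 6), m=2, r_factor=0.2):
    """MSE: sample entropy of the coarse-grained (block-averaged) signal at each scale."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = np.asarray(x[:n * s], dtype=float).reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse, m, r_factor))
    return out
```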
Abstract:
Technology now makes low-cost electronic devices available to every kind of company, capable of recording customer behaviour and therefore of profiling user groups according to common behaviours. Motor insurance companies are very sensitive to this topic because, to remain competitive, the premium each customer pays must be low enough to attract the customer to the company but also high enough not to become a loss on the company's balance sheet. In recent years we have therefore seen the spread of black boxes, which are simply satellite devices that help insurance companies define their customers' profiles in greater detail. Better profiling benefits both the policyholder and the insurer: on the one hand the customer sees the insurance premium drop, and on the other the insurer gains confidence in the customer's reliability. These devices are built to perform two main tasks: collecting data on the driver's behaviour for profiling, and detecting an accident. As an additional service, first-aid support can be requested. The idea of this thesis is to develop a smartphone application that acts as a black box and to analyse the advantages and limitations of the hardware. First of all, the application can be installed on different devices and can be updated year after year without having to discard existing hardware; moreover, the smartphone makes it possible to add more advanced client-care features. These features would be of considerable interest to the driver: think, for example, of the moment of an accident, where an application that detects the crash the driver has been involved in and immediately alerts the emergency services, explicitly indicating the address where it occurred, would be invaluable.
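The abstract does not specify the accident-detection logic; purely as an illustration, the sketch below shows the kind of threshold rule a smartphone black box might apply to the accelerometer magnitude. The 4 g threshold and the consecutive-sample window are hypothetical values, not the thesis's parameters.

```python
import math

G = 9.81  # standard gravity, m/s^2

def detect_impact(samples, threshold_g=4.0, min_consecutive=3):
    """Flag a possible crash when the acceleration magnitude exceeds a
    threshold for a few consecutive samples. `samples` is an iterable of
    (ax, ay, az) tuples in m/s^2; threshold and window are illustrative."""
    run = 0
    for ax, ay, az in samples:
        magnitude_g = math.sqrt(ax * ax + ay * ay + az * az) / G
        run = run + 1 if magnitude_g > threshold_g else 0
        if run >= min_consecutive:
            return True
    return False
```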
Abstract:
The goal of this thesis is to provide the basic information that an aspiring Android programmer needs in order to write applications that use the sensors found in modern mobile phones (accelerometer, gyroscope, proximity sensor, etc.). The thesis opens with some historical anecdotes about the birth of the world's best-known operating system and lists all the official releases and the new features they introduced, from 1.0 to the current 5.1.1 Lollipop. The fundamental components used to build an Android application are then analysed: Activities, Services, Content Providers and Broadcast Receivers. The concept of a sensor is introduced and explored in depth, both from the physical point of view and from the computational/logical point of view, highlighting its three most important dimensions: structure, interaction and behaviour. All the types of errors and real-world issues that can negatively affect the measured values (disturbances, noise, etc.) are analysed, and the modern Sensor Fusion approach is presented as a case study, drawing on the work of major companies such as Invensense and Kionix Inc. Finally, the thesis moves from words to code: the analysis and implementation phases of an example application are covered, an application capable of determining the orientation of the device in space by exploiting several Sensor Fusion techniques.
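The thesis's own Sensor Fusion implementation is not reproduced here; as a minimal illustration of the idea, the following Python sketch fuses gyroscope and accelerometer readings with a complementary filter to track pitch and roll. The axis conventions and the blending factor are assumptions, not values taken from the thesis.

```python
import math

def complementary_filter(samples, dt, alpha=0.98):
    """Fuse gyroscope rates with accelerometer tilt to track pitch and roll.
    `samples` yields ((ax, ay, az), (gx, gy, gz)) with accelerations in m/s^2
    and angular rates in rad/s; alpha weights the integrated gyro estimate."""
    pitch = roll = 0.0
    for (ax, ay, az), (gx, gy, gz) in samples:
        # Tilt angles implied by gravity as seen by the accelerometer.
        acc_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        acc_roll = math.atan2(ay, az)
        # Blend the integrated gyro rate (good short-term) with the
        # accelerometer tilt (good long-term, but noisy during motion).
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
        roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
        yield pitch, roll
```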
Abstract:
This work describes the thesis project "FastLApp". "FastLApp" is an application for the Android platform whose goal is to acquire telemetry data about the behaviour of a motorcycle on the road and on the track, and whose target users are amateur motorcyclists. After an introduction to the Android operating system, to the main telemetry and data-acquisition techniques and to related applications, the features, implementation and testing of the application developed for this thesis are described. After a brief description of the tools and technologies used, the application is evaluated and its possible future developments are discussed.
Abstract:
Enzootic pneumonia (EP) of pigs, caused by Mycoplasma hyopneumoniae, has been a notifiable disease in Switzerland since May 2003. The diagnosis of EP has been based on multiple methods, including clinical, bacteriological and epidemiological findings as well as pathological examination of lungs (mosaic diagnosis). With the recent development of a real-time PCR (rtPCR) assay with two target sequences, a new detection method for M. hyopneumoniae became available. This assay was tested for its applicability to nasal swab material from live animals. Pigs from 74 herds (on average 10 pigs per herd) were tested. Using the mosaic diagnosis, 22 herds were classified as EP positive and 52 as EP negative. From the 730 collected swab samples we were able to demonstrate that the rtPCR test was 100% specific. In cases of cough, the herd-level sensitivity of the rtPCR is 100%. At the single-animal level and in herds without cough, the sensitivity was lower. In such cases, only a positive result would be proof of infection with M. hyopneumoniae. Our study shows that rtPCR on nasal swabs from live pigs allows a fast and accurate diagnosis in cases of suspected EP.
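The herd-level figures above imply an aggregation rule in which a herd is called positive if at least one of its swabs is rtPCR-positive; the sketch below illustrates that computation with hypothetical inputs, not the study's actual records.

```python
def herd_level_performance(herds):
    """Herd-level sensitivity and specificity when a herd is called
    rtPCR-positive if at least one swab in it tests positive.
    `herds` is a list of (true_ep_status, [swab_results]) with booleans."""
    tp = fp = tn = fn = 0
    for ep_positive, swabs in herds:
        called_positive = any(swabs)
        if ep_positive and called_positive:
            tp += 1
        elif ep_positive:
            fn += 1
        elif called_positive:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

# Hypothetical example: two truly positive herds, one negative herd.
print(herd_level_performance([
    (True, [False, True, False]),    # detected
    (True, [False, False, False]),   # missed
    (False, [False, False, False]),  # correctly negative
]))
```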
Abstract:
Metabolic Syndrome (MetS) is a clustering of cardiovascular (CV) risk factors that includes obesity, dyslipidemia, hyperglycemia, and elevated blood pressure. Applying the criteria for MetS can serve as a clinically feasible tool for identifying patients at high risk for CV morbidity and mortality, particularly those who do not fall into traditional risk categories. The objective of this study was to examine the association between MetS and CV mortality among 10,940 American hypertensive adults, ages 30-69 years, participating in a large randomized controlled trial of hypertension treatment (HDFP 1973-1983). MetS was defined as the presence of hypertension and at least two of the following risk factors: obesity, dyslipidemia, or hyperglycemia. Of the 10,763 individuals with sufficient data available for analysis, 33.2% met criteria for MetS at baseline. The baseline prevalence of MetS was significantly higher among women (46%) than men (22%) and among non-blacks (37%) versus blacks (30%). All-cause and CV mortality were assessed for the 10,763 individuals. Over a median follow-up of 7.8 years, 1,425 deaths were observed. Approximately 53% of these deaths were attributed to CV causes. Compared to individuals without MetS at baseline, those with MetS had higher rates of all-cause mortality (14.5% vs. 12.6%) and CV mortality (8.2% vs. 6.4%). The unadjusted risk of CV mortality among those with MetS was 1.31 (95% confidence interval [CI], 1.12-1.52) times that for those without MetS at baseline. After adjustment for the traditional risk factors of age, race, gender, history of cardiovascular disease (CVD), and smoking status, individuals with MetS, compared to those without MetS, were 1.42 (95% CI, 1.20-1.67) times more likely to die of CV causes. Of the individual components of MetS, hyperglycemia/diabetes conferred the strongest risk of CV mortality (OR 1.73; 95% CI, 1.39-2.15). Results of the present study suggest that MetS, defined as the presence of hypertension and 2 additional cardiometabolic risk factors (obesity, dyslipidemia, or hyperglycemia/diabetes), can be used with some success to predict CV mortality in middle-aged hypertensive adults. Ongoing and future prospective studies are vital to examine the association between MetS and cardiovascular morbidity and mortality in select high-risk subpopulations, and to continue evaluating the public health impact of aggressive, targeted screening, prevention, and treatment efforts to prevent future cardiovascular disability and death.
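For clarity, the study's stated definition of MetS translates directly into a simple classification rule; the sketch below encodes that rule, with hypothetical example inputs (the field names are illustrative, not the trial's variable names).

```python
def has_mets(hypertensive, obese, dyslipidemic, hyperglycemic):
    """Study definition as described in the abstract: hypertension plus at
    least two of obesity, dyslipidemia, or hyperglycemia/diabetes."""
    return hypertensive and sum([obese, dyslipidemic, hyperglycemic]) >= 2

# Hypothetical participants.
print(has_mets(True, True, False, True))   # True:  hypertension + 2 factors
print(has_mets(True, True, False, False))  # False: only 1 additional factor
```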
Abstract:
Inertial sensors (accelerometers and gyroscopes) have gradually been introduced into the devices we use in our daily lives thanks to their miniaturization. Nowadays every smartphone contains at least an accelerometer and a magnetometer, complemented in the most modern ones by gyroscopes and barometers. This, together with the proliferation of smartphones, has made it feasible to design systems based on the measurements of sensors that the user wears somewhere on the body (which in the future will be embedded in smart textiles) or that are built into the phone. The role of these sensors has become fundamental for the development of context-aware and ambient-intelligence applications. Some examples are the monitoring of rehabilitation exercises or the delivery of information about the tourist site being visited. The work in this thesis contributes to exploring the possibilities that inertial sensors offer for supporting activity detection and improving the accuracy of pedestrian localization services. Regarding the recognition of the activity a user is performing, the use of the sensors built into latest-generation mobile devices (light and proximity sensors, accelerometer, gyroscope and magnetometer) has been explored. The target activities are the ones known as "atomic" activities (walking at different speeds, standing, running, sitting), that is, activities that are the building blocks of more complex activities such as doing the dishes or commuting to work. Accordingly, simple classification algorithms that can run on a phone are used, such as Naïve Bayes, Decision Tables and Decision Trees. In addition, the aim is also to detect the position in which the user carries the phone, not only in order to use this information to choose a classifier trained only with data collected in the corresponding position (a strategy that improves activity-estimation results), but also to generate an event that can trigger an action. Finally, the work includes an analysis of classification performance as the type of features and the number of sensors used are varied, taking into account not only classification accuracy but also computational load. In addition, an algorithm based on step counting has been proposed, using information from an accelerometer placed on the user's foot. The final goal is to detect the activity the user is performing together with an approximate estimate of the distance covered. The step-counting algorithm is based on detecting maxima and minima using time windows and thresholds, without requiring user-specific information. Pedestrian tracking indoors is of particular interest because of the lack of a localization standard for this kind of environment. A loosely coupled, centralized extended Kalman filter has been designed to fuse the information measured by an accelerometer placed on the user's foot with position measurements. Different error-correction techniques have also been applied, such as zero-velocity updates based on detecting the instants at which the foot is resting on the ground. The results have been obtained in indoor environments using positions estimated by a triangulation system based on received signal strength (RSS) measurements, and using GPS outdoors.
Finally, some applications have been implemented that demonstrate the usefulness of the work carried out. First, an activity-monitoring application has been considered that gives the user information about the level of activity performed over a period of time. The ultimate goal is to encourage a change in sedentary behaviour, fostering healthy habits. Two versions of this application have been developed. In the first case, the step-counting algorithm has been integrated into a mobile OSGi platform, acquiring the data from a Bluetooth accelerometer placed on the foot. In the second case, the same application has been built using the classifier implementations on an Android device. In addition, the design of an application for the automatic creation of a travel diary from the detection of important events has been proposed. This application takes as input the information coming from the activity and localization estimates, together with information stored in open databases (photos, information about places) and information from the phone's real and virtual sensors (calendar, camera, etc.). Abstract: Inertial sensors (accelerometers and gyroscopes) have been gradually embedded in the devices that people use in their daily lives thanks to their miniaturization. Nowadays all smartphones have at least one embedded magnetometer and accelerometer, with the most up-to-date ones also containing gyroscopes and barometers. This, together with the fact that the penetration of smartphones is growing steadily, has made possible the design of systems that rely on the information gathered by wearable sensors (in the future contained in smart textiles) or inertial sensors embedded in a smartphone. The role of these sensors has become key to the development of context-aware and ambient-intelligence applications. Some examples are the performance of rehabilitation exercises, the provision of information related to the place that the user is visiting or the interaction with objects by gesture recognition. The work of this thesis contributes to exploring to what extent this kind of sensor can be useful to support activity recognition and pedestrian tracking, which have been proven to be essential for these applications. Regarding the recognition of the activity that a user performs, the use of sensors embedded in a smartphone (proximity and light sensors, gyroscopes, magnetometers and accelerometers) has been explored. The activities that are detected belong to the group known as "atomic" activities (e.g. walking at different paces, running, standing), that is, activities or movements that are part of more complex activities such as doing the dishes or commuting. Simple, well-known classifiers that can run embedded in a smartphone have been tested, such as Naïve Bayes, Decision Tables and Trees. In addition to this, another aim is to estimate the on-body position in which the user is carrying the mobile phone. The objective is not only to choose a classifier that has been trained with the corresponding data in order to enhance the classification, but also to start actions. Finally, the performance of the different classifiers is analysed, taking into consideration different features and numbers of sensors. The computational and memory load of the classifiers is also measured. On the other hand, an algorithm based on step counting has been proposed.
The acceleration information is provided by an accelerometer placed on the foot. The aim is to detect the activity that the user is performing together with an estimate of the distance covered. The step counting strategy is based on detecting minima and their corresponding maxima. Although the counting strategy is not innovative (it includes time windows and amplitude thresholds to prevent under- or overestimation), no user-specific information is required. The field of indoor pedestrian tracking is crucial due to the lack of a localization standard for this kind of environment. A loosely coupled, centralized Extended Kalman Filter has been proposed to perform the fusion of inertial and position measurements. Zero velocity updates have been applied whenever the foot is detected to be placed on the ground. The results have been obtained in indoor environments using a triangulation algorithm based on RSS measurements, and using GPS outdoors. Finally, some applications have been designed to test the usefulness of the work. The first one is called the 'Activity Monitor', whose aim is to prevent sedentary behaviours and to modify habits to achieve desired objectives of activity level. Two different versions of the application have been implemented. The first one uses the activity estimation based on the step counting algorithm, which has been integrated in an OSGi mobile framework acquiring the data from a Bluetooth accelerometer placed on the foot of the individual. The second one uses activity classifiers embedded in an Android smartphone. On the other hand, the design of a 'Travel Logbook' has been planned. The input of this application is the information provided by the activity and localization modules, external databases (e.g. pictures, points of interest, weather) and mobile embedded and virtual sensors (agenda, camera, etc.). The aim is to detect important events in the journey and gather the information necessary to store it as a journal page.
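As an illustration of the step-counting strategy described above (peak detection with time windows and amplitude thresholds, no user-specific calibration), here is a minimal Python sketch; the threshold values are illustrative only, not those derived in the thesis.

```python
import numpy as np

def count_steps(acc_magnitude, fs, min_step_interval_s=0.3, min_amplitude=1.5):
    """Count steps as local maxima of the acceleration magnitude that exceed
    the signal mean by `min_amplitude` (m/s^2) and are separated by at least
    `min_step_interval_s` seconds. `fs` is the sampling rate in Hz."""
    x = np.asarray(acc_magnitude, dtype=float)
    baseline = x.mean()
    min_gap = int(min_step_interval_s * fs)
    steps, last_peak = 0, -min_gap
    for i in range(1, len(x) - 1):
        is_peak = x[i] > x[i - 1] and x[i] >= x[i + 1]
        if is_peak and x[i] - baseline > min_amplitude and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps
```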
Abstract:
With the ever-growing trend of smart phones and tablets, Android is becoming more popular every day. With more than one billion active users to date, Android is the leading technology in the smart phone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid 2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Considering this number of Android applications, it is quite evident that people rely on these applications on a daily basis for the completion of simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the gigantic number of applications, it becomes very hard to test Android applications manually, especially when they have to be verified for various versions of the OS and for various device configurations such as different screen sizes and different hardware availability. Hence, recently there has been a lot of work in the computer science community on developing different testing methods for Android applications. The Android model attracts researchers because of its open-source nature: the whole research model becomes more streamlined when the code for both the application and the platform is readily available to analyze. Hence, there has been a great deal of research in testing and static analysis of Android applications. Much of this research has focused on input test generation for Android applications. As a result, there are several testing tools available now which focus on the automatic generation of test cases for Android applications. These tools differ from one another on the basis of the strategies and heuristics they use for the generation of test cases. But there is still very little work done on the comparison of these testing tools and the strategies they use. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. This was done by running these tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by the tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific and attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform. They can be anything, ranging from a basic system call for receiving an SMS to more complex tasks like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools which is based on the performance related to these attributes. This will allow the comparison of these tools with respect to these attributes. For example, if there is an application that plays some audio file, will the testing tool be able to generate a test input that triggers the playback of this audio file?
By using multiple applications that exercise different attributes, it becomes visible which testing tool is more useful for which kinds of attributes. In this thesis, it was decided that 9 attributes covering the basic nature of tasks would be targeted for the assessment of three testing tools. Later, this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from the previous work, is that the applications used are all specially made for this research. The reason for doing this is to analyze in isolation just the specific attribute that the application is focused on, and not allow the tool to get bottlenecked by something trivial that is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools using the 9 applications specially designed and developed for this study.
• An analysis of the results of the study carried out and a comparison of the performance of the selected tools.
Abstract:
The initial image-processing stages of visual cortex are well suited to a local (patchwise) analysis of the viewed scene. But the world's structures extend over space as textures and surfaces, suggesting the need for spatial integration. Most models of contrast vision fall shy of this process because (i) the weak area summation at detection threshold is attributed to probability summation (PS) and (ii) there is little or no advantage of area well above threshold. Both of these views are challenged here. First, it is shown that results at threshold are consistent with linear summation of contrast following retinal inhomogeneity, spatial filtering, nonlinear contrast transduction and multiple sources of additive Gaussian noise. We suggest that the suprathreshold loss of the area advantage in previous studies is due to a concomitant increase in suppression from the pedestal. To overcome this confound, a novel stimulus class is designed where: (i) the observer operates on a constant retinal area, (ii) the target area is controlled within this summation field, and (iii) the pedestal is fixed in size. Using this arrangement, substantial summation is found along the entire masking function, including the region of facilitation. Our analysis shows that PS and uncertainty cannot account for the results, and that suprathreshold summation of contrast extends over at least seven target cycles of grating. © 2007 The Royal Society.
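For readers unfamiliar with the modelling convention, probability summation in this literature is usually approximated by Minkowski (Quick) pooling over detecting mechanisms; the expression below is the standard textbook form of that approximation, not an equation taken from this paper.

```latex
% Minkowski (Quick) pooling over n mechanisms with responses r_i:
\[
  R \;=\; \Bigl(\sum_{i=1}^{n} r_i^{\,\beta}\Bigr)^{1/\beta},
  \qquad
  \beta = 1 \ \text{(linear summation)}, \quad
  \beta \approx 3\text{--}4 \ \text{(probability summation)}.
\]
% With equal inputs, R grows only as n^{1/\beta}, so probability summation
% predicts a much weaker benefit of stimulus area than linear summation.
```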
Abstract:
With the rapid growth of the Internet, computer attacks are increasing at a fast pace and can easily cause millions of dollars in damage to an organization. Detecting these attacks is an important issue in computer security. There are many types of attacks, and they fall into four main categories: Denial of Service (DoS) attacks, Probe attacks, User to Root (U2R) attacks, and Remote to Local (R2L) attacks. Within these categories, DoS and Probe attacks continuously show up with greater frequency in a short period of time when they attack systems. They are different from normal traffic data and can be easily separated from normal activities. On the contrary, U2R and R2L attacks are embedded in the data portions of the packets and normally involve only a single connection. It becomes difficult to achieve satisfactory detection accuracy for these two attacks. Therefore, we focus on studying the ambiguity problem between normal activities and U2R/R2L attacks. The goal is to build a detection system that can accurately and quickly detect these two attacks. In this dissertation, we design a two-phase intrusion detection approach. In the first phase, a correlation-based feature selection algorithm is proposed to increase the speed of detection. Features with poor ability to predict the signatures of attacks and features inter-correlated with one or more other features are considered redundant. Such features are removed and only indispensable information about the original feature space remains. In the second phase, we develop an ensemble intrusion detection system to achieve accurate detection performance. The proposed method includes multiple feature-selecting intrusion detectors and a data mining intrusion detector. The former consist of a set of detectors, each of which uses a fuzzy clustering technique and belief theory to solve the ambiguity problem. The latter applies data mining techniques to automatically extract computer users' normal behavior from training network traffic data. The final decision is a combination of the outputs of the feature-selecting and data mining detectors. The experimental results indicate that our ensemble approach not only significantly reduces the detection time but also effectively detects U2R and R2L attacks that contain degrees of ambiguous information.
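The dissertation's exact feature-selection algorithm is not given in the abstract; as a hedged illustration of a correlation-based filter of the kind described (drop features that predict the attack labels poorly or that are strongly inter-correlated with already selected features), consider the following sketch. The thresholds are illustrative, not the dissertation's values.

```python
import numpy as np

def correlation_filter(X, y, relevance_min=0.1, redundancy_max=0.9):
    """Greedy correlation-based feature selection: rank features by absolute
    correlation with the label, then keep a feature only if it is not too
    strongly correlated with any feature already selected. Returns the
    indices of the selected columns of X."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    relevance = np.array(
        [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    )
    selected = []
    for j in np.argsort(-relevance):
        if relevance[j] < relevance_min:
            break  # remaining features predict the label too poorly
        redundant = any(
            abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_max
            for k in selected
        )
        if not redundant:
            selected.append(int(j))
    return selected
```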