891 results for rule-based logic
Abstract:
Context: Mobile applications support a set of user-interaction features that are independent of the application logic. Rotating the device, scrolling, and zooming are examples of such features. Some bugs in mobile applications can be attributed to these user-interaction features. Objective: This paper proposes and evaluates a bug analyzer that uses digital image processing to find bugs associated with user-interaction features. Method: Our bug analyzer detects bugs by comparing the similarity between images captured before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed to decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. Manual testing found 17 bugs, whereas image processing detected 15. Conclusions: 8 of the 15 mobile applications tested had bugs associated with user-interaction features. Our image-processing-based bug analyzer detected 88% (15 out of 17) of the user-interaction bugs found by manual testing.
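As a concrete illustration of the comparison step, a minimal Python sketch of SURF-based before/after matching follows, assuming an opencv-contrib build with the non-free SURF module enabled; the 0.7 ratio test and the 0.5 decision threshold are illustrative values, not the paper's calibration.

```python
# A minimal sketch of the before/after comparison step, not the authors' exact
# pipeline. Requires opencv-contrib-python built with the non-free SURF module.
import cv2

def surf_similarity(path_before: str, path_after: str, hessian: int = 400) -> float:
    """Return the fraction of SURF interest points matched between two screenshots."""
    img1 = cv2.imread(path_before, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path_after, cv2.IMREAD_GRAYSCALE)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)
    if not kp1 or not kp2:
        return 0.0
    # Brute-force matching with Lowe's ratio test to keep distinctive matches only.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    return len(good) / min(len(kp1), len(kp2))

if surf_similarity("before.png", "after.png") < 0.5:  # hypothetical threshold
    print("Possible user-interaction bug: screens diverge too much.")
```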
Abstract:
Melanoma is a type of skin cancer caused by the uncontrolled growth of atypical melanocytes. In recent decades, computer-aided diagnosis has been used to support medical professionals; however, there is still no globally accepted tool. In this context, and in line with the state of the art, we propose a system that receives a dermatoscopy image and provides a diagnosis of whether the lesion is benign or malignant. The tool comprises the following modules: Preprocessing, Segmentation, Feature Extraction, and Classification. Preprocessing involves the removal of hair. Segmentation isolates the lesion. Feature extraction follows the ABCD dermoscopy rule. Classification is performed by a Support Vector Machine. Experimental evidence indicates that the proposal achieves 90.63% accuracy, 95% sensitivity, and 83.33% specificity on a dataset of 104 dermatoscopy images. These results are favorable compared with the performance of traditional diagnosis in dermatology.
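A schematic sketch of the Classification stage follows, assuming ABCD-style features (asymmetry, border, colour, diameter) have already been extracted; the synthetic data, feature layout, and train/test split are illustrative assumptions, not the paper's experimental setup.

```python
# Sketch of the Classification module only: an SVM over pre-extracted ABCD
# features. Data here is synthetic; real inputs would come from the earlier
# Preprocessing / Segmentation / Feature Extraction stages.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((104, 4))            # 104 lesions x [asymmetry, border, colour, diameter]
y = rng.integers(0, 2, size=104)    # 0 = benign, 1 = malignant (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```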
Abstract:
Intrusion Detection Systems (IDSs) provide an important layer of security for computer systems and networks, and are becoming more and more necessary as reliance on Internet services increases and systems with sensitive data are more commonly open to Internet access. An IDS's responsibility is to detect suspicious or unacceptable system and network activity and to alert a systems administrator to it. The majority of IDSs use a set of signatures that define what suspicious traffic looks like; Snort is one popular and actively developed open-source IDS that uses such a set of signatures, known as Snort rules. Our aim is to identify a way in which Snort could be developed further by generalising rules to identify novel attacks. In particular, we attempted to relax and vary the conditions and parameters of current Snort rules, using an approach similar to classic rule-learning operators such as generalisation and specialisation. We demonstrate the effectiveness of our approach through experiments with standard datasets and show that we are able to detect previously undetected variants of various attacks. We conclude by discussing the general effectiveness and appropriateness of generalisation in Snort-based IDS rule processing. Keywords: anomaly detection, intrusion detection, Snort, Snort rules
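To illustrate one such operator, the Python sketch below relaxes the destination port of a typical (hypothetical) Snort signature to "any"; the paper's approach covers a range of relax/vary operators, of which this is only a toy instance.

```python
# A toy illustration of one generalisation operator: relaxing the destination
# port in a Snort rule header. The rule text is a typical hypothetical
# signature; real Snort rules carry many more options.
RULE = ('alert tcp $EXTERNAL_NET any -> $HOME_NET 80 '
        '(msg:"WEB-ATTACK cmd.exe access"; content:"cmd.exe"; nocase; sid:1002; rev:1;)')

def relax_dest_port(rule: str) -> str:
    """Generalise the destination port field of the rule header to 'any'."""
    header, options = rule.split("(", 1)
    parts = header.split()          # action proto src_ip src_port -> dst_ip dst_port
    parts[6] = "any"                # widen the destination port condition
    return " ".join(parts) + " (" + options

print(relax_dest_port(RULE))
# alert tcp $EXTERNAL_NET any -> $HOME_NET any (msg:"WEB-ATTACK cmd.exe access"; ...)
```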
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques, in which models replicate the behaviour of an actual system. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM simulator targets low-cost SIMD multi-core machines. Its performance was evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aims of these experiments were to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains can be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach outperforms an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than those for which open-source files were available.
The experimental results show that, with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, this software-architecture simulator running on a SIMD machine was shown to be much faster than, and able to handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance of ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. ZSIM running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost, and the experimental results show that the Xeon Phi is competitive with simulation on GPUs while handling much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that went beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
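To illustrate the SIMD-friendly data-structure idea, a minimal structure-of-arrays sketch in NumPy follows: gate inputs are fetched with vector gathers (fancy indexing) so that a whole topological level is evaluated with array operations and no locks. This is an illustration of the principle, not the ZSIM implementation.

```python
# Structure-of-arrays levelised gate evaluation: one level of gates is
# evaluated with vector gathers and elementwise bit operations, lock-free.
import numpy as np

# Flat value array: entries 0..3 are primary inputs, 4..6 are gate outputs.
values = np.array([0, 1, 1, 0, 0, 0, 0], dtype=np.uint8)

# One topological level of 3 two-input gates, stored column-wise.
in_a = np.array([0, 1, 2])          # gather indices of first inputs
in_b = np.array([1, 2, 3])          # gather indices of second inputs
out  = np.array([4, 5, 6])          # where each gate writes its result
op   = np.array([0, 1, 2])          # 0 = AND, 1 = OR, 2 = XOR

a = values[in_a]                    # vector gather of all first inputs
b = values[in_b]                    # vector gather of all second inputs
res = np.select([op == 0, op == 1, op == 2], [a & b, a | b, a ^ b])
values[out] = res                   # scatter results back; one level done

print(values)                       # [0 1 1 0 0 1 1]
```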
Abstract:
This thesis examines the interpretation and application, by the High Court of Justice of Israel (HCJ), of principles of the international law of occupation and international human rights law in adjudicating petitions brought by Palestinian litigants. It focuses in particular on judgments rendered since the outbreak of the second Intifada (2000) in response to petitions challenging the legality of measures adopted by the Israeli authorities in the name of an alleged need to increase the security of Israeli settlements and settlers in the occupied territory of the West Bank. The first question under study concerns the extent to which the Court offers an effective remedy to Palestinian petitioners in the face of alleged violations of their international rights by the occupant. The research adopts the HCJ's position that the law of occupation is guided by an internal logic balancing the interests at stake, namely the occupant's security needs, on the one hand, and the fundamental rights of the occupied population, on the other. It further considers that this logic is reflected in the normative principles forming the basis of this body of law: that occupation is by nature temporary, that occupation gives rise to a fiduciary relationship, and, finally, that the occupant acquires no sovereignty over the territory. The second question posed is therefore whether the Court's interpretation of the law has promoted these normative principles or, on the contrary, undermined them. The combination of several factors, namely the prolonged duration of Israel's occupation of the West Bank, the heightened security threat since 2000, and an active, state-supported Israeli settlement policy, presents a unique test case for the hypothesis that the national courts of democratic states in general, and those acting as the highest judicial instance of an occupying power in particular, can ensure the protection of fundamental rights and freedoms and of the rule of law at the international level. The first chapter studies, in light of the first normative principle stated above, the judgments rendered by the HCJ in cases challenging the legality of the construction of the wall inside the West Bank and of the so-called closed zone (Seam Zone), as well as the special security zones surrounding the settlements. The second chapter analyses, this time in light of the second normative principle, judgments in cases concerning restrictions on movement imposed on Palestinians for the alleged purpose of protecting the security of the settlements and/or the settlers. The third chapter examines the judgments rendered in cases challenging the legality of the route of the wall inside and around the annexed territory of East Jerusalem. The conclusions of this research are based on data drawn from interviews conducted with Israeli lawyers who regularly appear before the HCJ on behalf of Palestinian litigants.
Abstract:
Fuzzy logic admits infinitely many intermediate truth values between false and true. Building on this principle, this work develops a fuzzy rule-based system that indicates the body mass index of ruminant animals, with the objective of determining the best moment for slaughter. The fuzzy system takes the variables mass and height as inputs and outputs a new body mass index, called the Fuzzy Body Mass Index (Fuzzy BMI), which can serve as a system for detecting the moment of slaughter of cattle, comparing animals with one another through the linguistic variables Very Low, Low, Medium, High, and Very High. To demonstrate and apply the system, 147 Nelore cows were analysed, determining the Fuzzy BMI value for each animal and indicating the body mass situation of the whole herd. The system was validated by a statistical analysis using Pearson's correlation coefficient, which at 0.923 represents a high positive correlation and indicates that the proposed method is adequate. The method thus makes it possible to evaluate the herd, comparing each animal with its peers in the group, thereby providing the rancher with a quantitative decision-making method. It can also be concluded that this work established a computational method based on fuzzy logic capable of imitating part of human reasoning and interpreting the body mass index of any type of bovine breed in any region of the country.
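A small Mamdani-style Python sketch of the idea follows: fuzzify mass and height, fire a few rules, and defuzzify to a Fuzzy BMI score. The membership breakpoints, rule base, and 0-100 output scale are illustrative placeholders, not the paper's calibration for Nelore cattle.

```python
# Toy fuzzy rule-based index: triangular memberships, min for AND,
# weighted-average defuzzification. Breakpoints are illustrative only.
import numpy as np

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_bmi(mass_kg: float, height_m: float) -> float:
    # Fuzzify inputs (illustrative breakpoints for cattle).
    mass_low, mass_high = tri(mass_kg, 200, 300, 450), tri(mass_kg, 350, 500, 650)
    hgt_low,  hgt_high  = tri(height_m, 1.0, 1.2, 1.4), tri(height_m, 1.3, 1.5, 1.7)
    # Rule base (min for AND): e.g. high mass AND low height -> high index.
    r_high = min(mass_high, hgt_low)
    r_mid  = min(mass_high, hgt_high)
    r_low  = min(mass_low,  hgt_high)
    # Weighted-average defuzzification onto a 0-100 scale.
    strengths = np.array([r_low, r_mid, r_high])
    centers   = np.array([25.0, 50.0, 75.0])
    return float((strengths @ centers) / strengths.sum()) if strengths.sum() else 50.0

print(f"Fuzzy BMI: {fuzzy_bmi(480, 1.35):.1f}")   # -> Fuzzy BMI: 62.5
```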
Abstract:
Repercussions of innovation adoption and diffusion studies have long been imperative to the success of novel introductions. However, perceptions and interpretations of innovation have been changing over time. The paradigm shift from the goods-dominant (G-D) logic to the service-dominant (S-D) logic potentially makes the distinction between product (goods) innovation and service innovation redundant, as the S-D logic lens views all innovations as service innovations (Vargo and Lusch, 2004; 2008; Lusch and Nambisan, 2015). From this perspective, product innovations are in essence service innovations, as goods serve as mere distribution mechanisms to deliver service. Nonetheless, the transition to such a broadened and transcending view of service innovation necessitates a concurrent change in the underlying models used to investigate innovation and its subsequent adoption. The present research addresses this gap by developing a novel model for the most crucial period of service diffusion within the S-D logic context: the post-initial adoption phase, which demarcates an individual's behavior after the initial adoption decision of a service. As a well-founded understanding of service diffusion and the complementary innovation adoption is still in its infancy, the current study develops a model based on interdisciplinary domain mapping: knowledge of the relatively established viral source domain is mapped to the comparatively undetermined target domain of service innovation adoption. To assess the model and test the importance of the explanatory variables, survey data from 750 respondents of a bank in Northern Germany is scrutinized by means of Structural Equation Modeling (SEM). The findings reveal, first, that a customer's continuance intention, actual usage of the service, and customer influencer value all constitute important post-initial adoption behaviors with meaningful implications for successful service adoption. Second, the four constructs customer influencer value, organizational commitment, perceived usefulness, and service customization are shown to have a differential impact on a customer's post-initial adoption behavior. Third, this study indicates that post-initial adoption behavior is further influenced by a user's age and is also shaped by the internal and external environments of service adoption. Finally, this research combines the broad view of service innovation by Lusch and Nambisan (2015) with the findings ensuing from this enquiry's model to arrive at a framework that is both generalizable and practically applicable. Implications for academia and practitioners are captured along with avenues for future research.
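For readers unfamiliar with SEM, a schematic model specification in lavaan-style syntax using the semopy Python package follows; the construct indicators (civ1..ci3) and the data file are hypothetical stand-ins for the survey instrument described above, not the study's actual measurement model.

```python
# Schematic SEM sketch with semopy: a measurement model for three of the
# constructs named above and one structural path. All names are hypothetical.
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model
CIV =~ civ1 + civ2 + civ3          # customer influencer value
PU  =~ pu1 + pu2 + pu3             # perceived usefulness
CI  =~ ci1 + ci2 + ci3             # continuance intention
# structural model
CI ~ CIV + PU
"""

data = pd.read_csv("survey_responses.csv")   # hypothetical 750-respondent file
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())                       # parameter estimates and p-values
```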
Abstract:
We apply the collective consumption model of Browning, Chiappori and Lewbel (2006) to analyse economic well-being and poverty among the elderly. The model focuses on individual preferences, a consumption technology that captures the economies of scale of living in a couple, and a sharing rule that governs the intra-household allocation of resources. The model is applied to a time series of Dutch consumption expenditure surveys. Our empirical results indicate substantial economies of scale and a wife's share that is increasing in total expenditures. We further calculated poverty rates by means of the collective consumption model. Collective poverty rates of widows and widowers turn out to be slightly lower than traditional ones based on a standard equivalence scale. Poverty among women (men) in elderly couples, however, seems to be heavily underestimated (overestimated) by the traditional approach. Finally, we analysed the impact of becoming a widow(er). Based on cross-sectional evidence, we find that the drop (increase) in material well-being following the husband's death is substantial for women in high (low) expenditure couples. For men, the picture is reversed.
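Schematically, the model's three ingredients can be sketched as follows, in generic notation rather than the authors' exact formulation: a consumption technology matrix A capturing economies of scale, and a sharing rule governing the intra-household allocation.

```latex
% Schematic rendering under generic notation: z are market purchases, x^f and
% x^m the private-equivalent consumptions of wife and husband, A the
% consumption technology, p market prices, y total expenditure, eta the
% sharing rule. Not the authors' exact notation.
\begin{align*}
  z &= A\,(x^{f} + x^{m}) && \text{consumption technology (economies of scale)}\\
  \eta &= \eta(p, y) \in (0,1) && \text{wife's share of household resources}\\
  \max_{x^{f}} \; U^{f}(x^{f}) &\;\; \text{s.t.} \;\; (A'p)'\,x^{f} \le \eta\, y && \text{each member's problem at shadow prices } A'p
\end{align*}
```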
Abstract:
In this work, we present a sound and complete axiomatic system for conditional attribute implications (CAIs) in Triadic Concept Analysis (TCA). Our approach is strongly based on the Simplification paradigm, which offers a more suitable way for automated reasoning than one based on Armstrong's Axioms. We also present an automated method to prove the derivability of a CAI from a set of CAIs.
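For orientation, the block below sketches Armstrong-style axioms next to a simplification-style inference rule, stated for ordinary implications; this is background to the Simplification paradigm, not the paper's triadic axiomatization of CAIs.

```latex
% Armstrong-style axioms vs. a simplification-style rule, for ordinary
% implications over attribute sets A, B, C, D (background sketch only).
\begin{align*}
  \text{[Ref]}:&\quad A \to B \ \text{whenever}\ B \subseteq A\\
  \text{[Augm]}:&\quad \text{from } A \to B \text{ infer } A \cup C \to B \cup C\\
  \text{[Trans]}:&\quad \text{from } A \to B \text{ and } B \to C \text{ infer } A \to C\\
  \text{[Simp]}:&\quad \text{from } A \to B \text{ and } C \to D,\ \text{with } A \subseteq C,\ \text{infer } C \setminus B \to D \setminus B
\end{align*}
```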
Abstract:
Recent research has found that even preschoolers give more resources to others who have previously given resources to them, but the psychological bases of this reciprocity are unknown. In our study, a puppet distributed resources between herself and a child either by taking some from a pile in front of the child or by giving some from a pile in front of herself. Although the resulting distributions were identical, three- and five-year-olds reciprocated less generously when the puppet had taken rather than given resources. This suggests that children's judgments about resource distribution are more about the social intentions of the distributor and the social framing of the distributional act than about the amount of resources obtained. To rule out that the differences in the children's reciprocal behavior were merely due to experiencing gains and losses, we conducted a follow-up study in which three- and five-year-olds won or lost resources in a lottery draw and could then freely give resources to, or take resources from, a puppet. In this study, they did not respond differently after winning vs. losing resources.
Abstract:
The length of stay of preterm infants in a neonatology service has become an issue of growing concern, considering, on the one hand, the mothers' and infants' health conditions and, on the other hand, the scarcity of the healthcare facilities' own resources. Thus, a pro-active problem-solving strategy has to be put in place, either to improve the quality of service provided or to reduce the inherent financial costs. This work therefore focuses on the development of a diagnosis decision support system, in terms of a formal agenda built on a Logic Programming approach to knowledge representation and reasoning, complemented with a case-based problem-solving methodology for computing that caters for the handling of incomplete, unknown, or even contradictory information. The proposed model has been quite accurate in predicting the length of stay (overall accuracy of 84.9%) while reducing the computational time by around 21.3%.
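A minimal case-based reasoning sketch in this spirit follows: retrieve the most similar past cases and predict length of stay from them, treating unknown feature values pessimistically. The feature names, weights, and toy case base are illustrative assumptions, not the paper's model.

```python
# Toy case-based reasoning: similarity-weighted k-nearest-case prediction of
# length of stay, with unknown (None) values treated as maximally dissimilar.
import math

CASE_BASE = [  # (gestational age in weeks, birth weight in g, apnea flag) -> length of stay (days)
    ({"ga": 32, "weight": 1800, "apnea": 0}, 14),
    ({"ga": 28, "weight": 1100, "apnea": 1}, 45),
    ({"ga": 35, "weight": 2400, "apnea": 0}, 7),
]
RANGES = {"ga": 15.0, "weight": 3000.0, "apnea": 1.0}   # feature ranges for normalisation

def similarity(query: dict, case: dict) -> float:
    """Distance-based similarity; incomplete information counts as worst case."""
    dist = 0.0
    for feat, rng in RANGES.items():
        if query.get(feat) is None or case.get(feat) is None:
            dist += 1.0                                  # unknown value: maximal dissimilarity
        else:
            dist += ((query[feat] - case[feat]) / rng) ** 2
    return 1.0 / (1.0 + math.sqrt(dist))

def predict_los(query: dict, k: int = 2) -> float:
    ranked = sorted(CASE_BASE, key=lambda c: similarity(query, c[0]), reverse=True)[:k]
    weights = [similarity(query, c[0]) for c in ranked]
    return sum(w * los for w, (_, los) in zip(weights, ranked)) / sum(weights)

print(f"Predicted length of stay: {predict_los({'ga': 30, 'weight': 1500, 'apnea': None}):.1f} days")
```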