887 results for Computer forensic analysis
Abstract:
Castor bean cropping has great social and economic value, but its production has been affected by factors such as the low quality of the seeds used for sowing. Quick and precise evaluation of seed quality by the x-ray test is known to be an effective method for evaluating seed lots, but little is known about the correlation between the type of radiographic image and seed quality. The potential of x-ray analysis as a marker of seed physiological quality, and as an initial step toward the implementation of computer-assisted image analysis, was investigated using castor bean seeds of different cultivars. The seeds were classified according to the internal morphology visualized in the radiographs and subjected to germination, seedling emergence, and seedling growth rate tests. It was possible to identify the different types of internal tissues and the morphological and physical damage in castor bean seeds using the x-ray test. Tissues generating translucent images, embryo deformation, and tissues with less than 50% of endosperm reserves or with spots negatively affected the physiological potential of the seed lots. Radiographic analysis is effective as an instrument to improve castor bean seed lot quality. This nondestructive analysis allows the prediction of seedling performance and enables the selection of high-quality seeds under the standards of sustainable and precision agriculture.
Abstract:
Modern automobiles are no longer just mechanical tools. The electronics and computing services they ship with make them nothing less than computers: massive kinetic devices with sophisticated computing power. Most modern vehicles are built with added connectivity in mind, which may leave them vulnerable to outside attack. Researchers have shown that it is possible to infiltrate a vehicle's internal systems remotely and control physical components such as the steering and brakes. It is quite possible to experience such an attack on a moving vehicle and be unable to use the controls. Because they are part of everyday life, these massive connected computers can be life threatening. The first part of this research studied the attack surfaces in the automotive cybersecurity domain. It also illustrated the attack methods and the extent of the damage they can cause. An online survey was deployed as the data collection tool to learn about consumers' use of such vulnerable automotive services. The second part of the research portrayed consumer privacy in the automotive world. It was found that almost one hundred percent of modern vehicles have the capability to send vehicle diagnostic data as well as user-generated data to their manufacturers, and almost thirty-five percent of automotive companies are already collecting them. Internet privacy has been studied in many related domains, but no privacy scale had been matched to automotive consumers. This created the research gap and motivation for this thesis. A study was performed to match a well-established consumer privacy scale, IUIPC, to the automotive consumers' privacy situation. Hypotheses were developed based on the IUIPC model for internet consumers' privacy and were tested against the findings from the data collection. Based on the key findings of the research, all the hypotheses were accepted, and hence it was found that automotive consumers' privacy follows the IUIPC model under certain conditions. It was also found that a majority of automotive consumers use services and devices that are vulnerable and prone to cyber-attacks, and that there is a market for automotive cybersecurity services, with consumers willing to pay a certain fee to avail of them.
Abstract:
The effects of two types of small-group communication, synchronous computer-mediated and face-to-face, on the quantity and quality of verbal output were compared. Quantity was defined as the number of turns taken per minute, the number of Analysis-of-Speech units (AS-units) produced per minute, and the number of words produced per minute. Quality was defined as the number of words produced per AS-unit. In addition, the interaction of gender and type of communication was explored for any differences that existed in the output produced. Questionnaires were also given to participants to determine attitudes toward computer-mediated and face-to-face communication. Thirty intermediate-level students from the Intensive English Language Program (IELP) at Brock University participated in the study, including 15 females and 15 males. Nonparametric tests, including the Wilcoxon matched-pairs test, Mann-Whitney U test, and Friedman test, were used to test for significance at the p < .05 level. No significant differences were found in the effects of computer-mediated and face-to-face communication on the output produced during follow-up speaking sessions. However, the quantity and quality of interaction was significantly higher during face-to-face sessions than computer-mediated sessions. No significant differences were found in the output produced by males and females in these two conditions. While participants felt that the use of computer-mediated communication may aid in the development of certain language skills, they generally preferred face-to-face communication. These results differed from previous studies that found a greater quantity and quality of output, in addition to a greater equality of interaction, produced during computer-mediated sessions in comparison to face-to-face sessions (Kern, 1995; Warschauer, 1996).
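For readers unfamiliar with these tests, the following is a minimal sketch (not the thesis analysis) of how the three nonparametric tests named above could be run in Python on made-up per-participant output measures; the data values, gender split, and the third repeated measure are assumptions for illustration only.

# Hedged sketch: simulated words-per-minute measures, not the study's data.
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, friedmanchisquare

rng = np.random.default_rng(4)
cmc = rng.normal(9.0, 2.0, 30)     # words/min, computer-mediated (same 30 students)
ftf = rng.normal(11.0, 2.0, 30)    # words/min, face-to-face

print("Wilcoxon matched-pairs:", wilcoxon(cmc, ftf))
print("Mann-Whitney U (first 15 vs. last 15 participants, FTF):",
      mannwhitneyu(ftf[:15], ftf[15:]))
print("Friedman (three repeated measures):",
      friedmanchisquare(cmc, ftf, rng.normal(10.0, 2.0, 30)))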
Abstract:
In this study, methods of media literacy instruction including analytic activities, production activities, and a combination of analytic and production activities were compared to determine their influence on grade 8 students' knowledge, attitudes, and behaviours towards commercials. The findings showed that media literacy instruction does improve media literacy skills. Specifically, activities that included an analytic component or an analytic and production component were significantly better than activities that included a production component. Participants that completed analytic or analytic and production activities were able to discern media-related terms, target audience, selling techniques, social values, and stereotypes in commercials better than participants that completed only production activities. The research findings also showed obstacles when teaching media literacy. When engaged in analytic activities, the difficulties included locating suitable resources, addressing the competition from commercials, encouraging written reflection, recognizing social values, and discussing racial stereotypes. When engaged in production activities, the difficulties were positioning recording stations, managing group work, organizing ideas, filming the footage, computer issues, and scheduling time. Strategies to overcome these obstacles are described.
Abstract:
This study was undertaken in order to determine the effects of playing computer based text adventure games on the reading comprehension gains of students. Forty-five grade five students from one elementary school were randomly assigned to experimental and control groups, and were tested with regard to ability, achievement and reading skills. An experimental treatment, consisting of playing computer based interactive fiction games of the student's choice for fifteen minutes each day over an eight-week period, was administered. A comparison treatment engaged the control group in sustained silent reading of materials of the student's choice for an equal period of time. Following the experimental period all students were post-tested with an alternate form of the pre-test in reading skills, and gain scores were analysed. It was found that there were no significant differences in the gain scores of the experimental and control groups for overall reading comprehension, but the experimental group showed greater gains than the control group in the structural analysis reading sub-skill. Extreme variance in the data made generalization very difficult, but the findings indicated a potential for computer based interactive fiction as a useful tool for developing reading skills.
Abstract:
Flow injection analysis (FIA) was applied to the determination of both chloride ion and mercury in water. Conventional FIA was employed for the chloride study. Investigations of the Fe³⁺/Hg(SCN)₂/Cl⁻, 450 nm spectrophotometric system for chloride determination led to the discovery of an absorbance in the 250-260 nm region when Hg(SCN)₂ and Cl⁻ are combined in solution, in the absence of iron(III). Employing an in-house FIA system, the absorbance observed at 254 nm exhibited a linear relation from essentially 0 to 2000 µg mL⁻¹ injected chloride. This linear range, spanning three orders of magnitude, is superior to the Fe³⁺/Hg(SCN)₂/Cl⁻ system currently employed by laboratories worldwide. The detection limit obtainable with the proposed method was determined to be 0.16 µg mL⁻¹ and the relative standard deviation was determined to be 3.5% over the concentration range of 0-200 µg mL⁻¹. Other halogen ions were found to interfere with chloride determination at 254 nm whereas cations did not interfere. This system was successfully applied to the determination of chloride ion in laboratory water. Sequential injection (SI)-FIA was employed for mercury determination in water with the PSA Galahad mercury amalgamation and Merlin mercury fluorescence detection systems. Initial mercury-in-air determinations involved injections of mercury-saturated air directly into the Galahad, whereas mercury-in-water determinations involved solution delivery via peristaltic pump to a gas/liquid separator, after reduction by stannous chloride. A series of changes were made to the internal hardware and valving systems of the Galahad mercury preconcentrator. Sequential injection solution delivery replaced the continuous peristaltic pump system, and computer control was implemented to control and integrate all aspects of solution delivery, sample preconcentration and signal processing. Detection limits currently obtainable with this system are 0.1 ng mL⁻¹ Hg⁰.
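As a hedged illustration only (not the thesis data work-up), the sketch below fits a linear calibration of absorbance against injected chloride and estimates a detection limit and relative standard deviation; all numbers are invented, and the 3·sigma(blank)/slope convention for the detection limit is an assumption, since the abstract does not state which convention was used.

# Hedged sketch: invented calibration data; 3-sigma LOD convention assumed.
import numpy as np

conc = np.array([0.0, 50.0, 200.0, 500.0, 1000.0, 2000.0])       # µg/mL chloride
absorbance = np.array([0.002, 0.021, 0.082, 0.205, 0.410, 0.818])
slope, intercept = np.polyfit(conc, absorbance, 1)                # linear calibration

blank_replicates = np.array([0.0018, 0.0024, 0.0019, 0.0022, 0.0021])
lod = 3 * blank_replicates.std(ddof=1) / slope                    # assumed 3-sigma rule

replicate_peaks = np.array([0.101, 0.098, 0.104, 0.100, 0.097])   # same standard, n = 5
rsd = 100 * replicate_peaks.std(ddof=1) / replicate_peaks.mean()

print(f"slope = {slope:.5f} AU per µg/mL, LOD ~ {lod:.2f} µg/mL, RSD = {rsd:.1f}%")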
Abstract:
This study examines the efficiency of search engine advertising strategies employed by firms. The research setting is the online retailing industry, which is characterized by extensive use of Web technologies and high competition for market share and profitability. For Internet retailers, search engines are increasingly serving as an information gateway for many decision-making tasks. In particular, search engine advertising (SEA) has opened a new marketing channel for retailers to attract new customers and improve their performance. In addition to natural (organic) search marketing strategies, search engine advertisers compete for top advertisement slots provided by search brokers such as Google and Yahoo! through keyword auctions, the rationale being that greater visibility on a search engine during a keyword search will capture customers' interest in a business and its product or service offerings. Search engines account for most online activities today. Compared with the slow growth of traditional marketing channels, online search volumes continue to grow at a steady rate. According to the Search Engine Marketing Professional Organization, spending on search engine marketing by North American firms in 2008 was estimated at $13.5 billion. Despite the significant role SEA plays in Web retailing, scholarly research on the topic is limited. Prior studies in SEA have focused on search engine auction mechanism design. In contrast, research on the business value of SEA has been limited by the lack of empirical data on search advertising practices. Recent advances in search and retail technologies have created data-rich environments that enable new research opportunities at the interface of marketing and information technology. This research uses extensive data from Web retailing and Google-based search advertising and evaluates Web retailers' use of resources, search advertising techniques, and other relevant factors that contribute to business performance across different metrics. The methods used include Data Envelopment Analysis (DEA), data mining, and multivariate statistics. This research contributes to empirical research by analyzing several Web retail firms in different industry sectors and product categories. One of the key findings is that the dynamics of sponsored search advertising vary between multi-channel and Web-only retailers. While the key performance metrics for multi-channel retailers include measures such as online sales, conversion rate (CR), click-through rate (CTR), and impressions, the key performance metrics for Web-only retailers focus on organic and sponsored ad ranks. These results provide a useful contribution to our organizational-level understanding of search engine advertising strategies for both multi-channel and Web-only retailers. They also contribute to current knowledge of technology-driven marketing strategies and provide managers with a better understanding of sponsored search advertising and its impact on various performance metrics in Web retailing.
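As a hedged illustration of the DEA method named above (not the thesis model), the sketch below computes input-oriented CCR efficiency scores for a handful of hypothetical retailers, solving one small linear program per decision-making unit; the inputs (ad spend, impressions) and output (online sales) are assumed stand-ins.

# Hedged sketch: hypothetical retailers, input-oriented CCR DEA via linear programming.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (retailers); columns = inputs / outputs (made-up values)
X = np.array([[100.0, 5000.0], [80.0, 7000.0], [120.0, 4000.0]])   # inputs
Y = np.array([[300.0], [280.0], [350.0]])                          # outputs
n = X.shape[0]

def ccr_efficiency(o):
    """Efficiency of DMU o: minimize theta s.t. a peer mix dominates DMU o."""
    c = np.r_[1.0, np.zeros(n)]                        # variables: [theta, lambda_1..n]
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[[o]].T, X.T])
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")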
Abstract:
L'interface cerveau-ordinateur (ICO) décode les signaux électriques du cerveau requise par l’électroencéphalographie et transforme ces signaux en commande pour contrôler un appareil ou un logiciel. Un nombre limité de tâches mentales ont été détectés et classifier par différents groupes de recherche. D’autres types de contrôle, par exemple l’exécution d'un mouvement du pied, réel ou imaginaire, peut modifier les ondes cérébrales du cortex moteur. Nous avons utilisé un ICO pour déterminer si nous pouvions faire une classification entre la navigation de type marche avant et arrière, en temps réel et en temps différé, en utilisant différentes méthodes. Dix personnes en bonne santé ont participé à l’expérience sur les ICO dans un tunnel virtuel. L’expérience fut a était divisé en deux séances (48 min chaque). Chaque séance comprenait 320 essais. On a demandé au sujets d’imaginer un déplacement avant ou arrière dans le tunnel virtuel de façon aléatoire d’après une commande écrite sur l'écran. Les essais ont été menés avec feedback. Trois électrodes ont été montées sur le scalp, vis-à-vis du cortex moteur. Durant la 1re séance, la classification des deux taches (navigation avant et arrière) a été réalisée par les méthodes de puissance de bande, de représentation temporel-fréquence, des modèles autorégressifs et des rapports d’asymétrie du rythme β avec classificateurs d’analyse discriminante linéaire et SVM. Les seuils ont été calculés en temps différé pour former des signaux de contrôle qui ont été utilisés en temps réel durant la 2e séance afin d’initier, par les ondes cérébrales de l'utilisateur, le déplacement du tunnel virtuel dans le sens demandé. Après 96 min d'entrainement, la méthode « online biofeedback » de la puissance de bande a atteint une précision de classification moyenne de 76 %, et la classification en temps différé avec les rapports d’asymétrie et puissance de bande, a atteint une précision de classification d’environ 80 %.
Abstract:
Child sexual abuse (CSA) is a complex subject to investigate, and allegations often rest exclusively on the child's testimony. However, even when a child discloses CSA, he or she may be reluctant to reveal certain personal and embarrassing details of the abuse to a stranger. Given that it is not always possible to obtain consent to film, and that it is relatively difficult to measure the nonverbal attitudes of the child and of the interviewer during investigative interviews, this research was innovative in creating verbal scales of such attitudes. To determine the correlation between interviewer attitude and child cooperation, 90 interviews of children aged 4 to 13 were analyzed. The interviews were audio-recorded, transcribed and coded using verbal subscales of supportive and non-supportive interviewer attitudes, as well as resistant and cooperative attitudes on the part of the child. The proportion of CSA details provided by the children was also computed. To compare interviews conducted with and without the National Institute of Child Health and Human Development (NICHD) protocol, a MANCOVA controlling for the child's age and the proportion of open-ended questions showed, as expected, that interviews following the protocol yield more details in response to open-ended questions than interviews without the protocol. However, no difference emerged in the attitudes of the child or of the interviewer. To find the best predictor of the amount of detail disclosed by the children, a hierarchical multiple regression analysis was performed. After controlling for the child's age, protocol use and the proportion of open-ended questions, child resistance and non-supportive interviewer attitude explained an additional 28% of the variance, with the total variance explained by the model being 58%. In addition, to determine whether child cooperation and interviewer attitude vary with the children's age, a MANOVA showed that interviewers behave similarly regardless of the children's age, even though young children are generally more reluctant and cooperate significantly less well than preadolescents. Finally, a hierarchical multiple regression showed that interviewer support is the best predictor of child cooperation, over and above the characteristics of the child and of the abuse. Although use of the NICHD protocol has brought considerable progress in how children are interviewed, increasing the proportion of details obtained through open-ended questions/free recall and strengthening the credibility of the testimony, adherence to the protocol is not in itself sufficient to convince young children to speak in detail about abuse to a stranger. The results of this thesis have scientific value and enrich theoretical knowledge about the attitudes of the child and the interviewer expressed during interviews. Even though the interviewers in this study offered more support to resistant children regardless of their age, better ways of countering the resistant attitudes expressed by young children, and of minimizing non-supportive attitudes during interviews, are needed to promote detailed disclosure of abuse.
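As a hedged illustration of the hierarchical regression described above (not the thesis analysis), the sketch below fits a baseline model and an extended model and reports the incremental R²; the variable names and simulated data are assumptions for the example only.

# Hedged sketch: simulated data frame with assumed variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 90
df = pd.DataFrame({
    "age": rng.uniform(4, 13, n),
    "protocol": rng.integers(0, 2, n),          # 1 = NICHD protocol interview
    "open_questions": rng.uniform(0, 1, n),     # proportion of open-ended questions
    "child_resistance": rng.uniform(0, 1, n),
    "nonsupportive": rng.uniform(0, 1, n),
})
df["details"] = (0.3 * df.open_questions - 0.4 * df.child_resistance
                 - 0.3 * df.nonsupportive + rng.normal(0, 0.1, n))

step1 = smf.ols("details ~ age + protocol + open_questions", data=df).fit()
step2 = smf.ols("details ~ age + protocol + open_questions"
                " + child_resistance + nonsupportive", data=df).fit()
print(f"Step 1 R2 = {step1.rsquared:.2f}")
print(f"Step 2 R2 = {step2.rsquared:.2f}, delta R2 = {step2.rsquared - step1.rsquared:.2f}")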
Abstract:
As the population of elderly people in industrialized countries grows over the years, the resources required to maintain their standard of living grow as well. Statistics show that falls are one of the main causes of hospitalization among the elderly, and it has furthermore been shown that an elderly person's fall risk correlates with their ability to maintain balance while standing. It is therefore of interest to develop an automated system for analyzing a person's balance as a means of objective assessment. In this study, we proposed the implementation of such a system. Based on a simple setup consisting of a single camera on a tripod, we developed an algorithm that uses an implementation of the Viola-Jones object detection method, together with template matching, to track both the lateral and the anterior-posterior motion of a subject. Good results were obtained with both types of tracking; however, the algorithm is sensitive to lighting conditions as well as to any source of noise present in the images. As future work, it would be of interest to integrate the two types of tracking in order to obtain a single, easy-to-interpret data set.
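As a hedged illustration of the detection-then-template-matching approach described above (not the study's code), the sketch below runs a Viola-Jones cascade on the first frame of a video and then tracks the detected region by normalized cross-correlation; the video path and the use of the stock frontal-face cascade are assumptions, since the study tracked body motion.

# Hedged sketch: hypothetical video path, stock OpenCV face cascade as a stand-in.
import cv2

cap = cv2.VideoCapture("balance_test.mp4")          # hypothetical input video
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# assumes at least one detection in the first frame
x, y, w, h = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)[0]
template = gray[y:y + h, x:x + w]                    # region found by Viola-Jones

positions = []                                       # horizontal position per frame
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (tx, ty) = cv2.minMaxLoc(scores)        # best match location
    positions.append(tx + w / 2)                     # lateral sway signal
cap.release()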
Abstract:
Gait analysis has recently emerged as one of the most important medical fields. Marker-based systems are the methods most favored for human movement assessment and gait analysis; however, these systems require specific equipment and expertise and are cumbersome, costly and difficult to use. Many recent computer vision approaches have been developed to reduce the cost of motion capture systems while ensuring highly accurate results. In this thesis, we present our new low-cost gait analysis system, composed of two monocular video cameras placed on the left and right sides of a treadmill. A 2D model of each half of the human skeleton is reconstructed from each view based on dynamic color segmentation, and gait analysis is then performed on these two models. Validation against a state-of-the-art vision-based motion capture system (the Microsoft Kinect) and against the ground truth (with markers) was carried out to demonstrate the robustness and effectiveness of our system. The mean error of the human skeleton model estimate with respect to the ground truth for our method versus the Kinect is very promising: the thigh (6.29° versus 9.68°), lower-leg (7.68° versus 11.47°) and foot (6.14° versus 13.63°) joint angles and the stride length (6.14 cm versus 13.63 cm) are better and more stable than those of the Kinect, while the system maintains accuracy fairly close to the Kinect for the upper arms (7.29° versus 6.12°), forearms (8.33° versus 8.04°) and torso (8.69° versus 6.47°). Based on the skeleton model obtained by each method, we carried out a symmetry study on different joints (elbow, knee and ankle) using each method on three different subjects, to see which method distinguishes the symmetry/asymmetry characteristics of gait more effectively. In our test, our system obtained a maximum knee angle of 8.97° and 13.86° for normal and asymmetric walks respectively, while the Kinect gave 10.58° and 11.94°. Compared with the ground truth, 7.64° and 14.34°, our system showed higher accuracy and greater discriminating power between the two cases.
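As a hedged illustration (not the thesis implementation), the sketch below computes a 2D joint angle, such as the knee angle compared above against the Kinect and the marker-based ground truth, from three skeleton keypoints; the coordinates are made-up example values.

# Hedged sketch: made-up keypoint coordinates, generic 2D joint-angle formula.
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    a, b, c = map(np.asarray, (a, b, c))
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

hip, knee, ankle = (0.52, 0.40), (0.55, 0.62), (0.53, 0.85)   # normalized image coords
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")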
Abstract:
A method for computer-aided diagnosis of microcalcification clusters in mammogram images is presented. Microcalcification clusters, which are an early sign of breast cancer, appear as isolated bright spots in mammograms and therefore correspond to local maxima of the image. The local maxima of the image are first detected and then ranked according to a higher-order statistical test performed over the sub-band domain data.
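As a hedged illustration of the first step described above (not the paper's method), the sketch below detects isolated bright spots as local maxima with a maximum filter and ranks them by intensity; the higher-order statistical test over sub-band data is not reproduced, and the simple intensity ranking stands in for it.

# Hedged sketch: random image stands in for a mammogram; intensity ranking only.
import numpy as np
from scipy.ndimage import maximum_filter

rng = np.random.default_rng(2)
image = rng.random((256, 256))                    # stand-in for a mammogram image

footprint = np.ones((9, 9))                       # neighborhood defining "local" maxima
is_peak = (image == maximum_filter(image, footprint=footprint))
rows, cols = np.nonzero(is_peak)
order = np.argsort(image[rows, cols])[::-1]       # brightest candidates first
candidates = list(zip(rows[order], cols[order]))[:20]
print(candidates[:5])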
Abstract:
Many finite elements used in structural analysis possess deficiencies like shear locking, incompressibility locking, poor stress predictions within the element domain, violent stress oscillation, poor convergence, etc. An approach that can probably overcome many of these problems would be to consider elements in which the assumed displacement functions satisfy the equations of stress field equilibrium. In this method, the finite element will not only have nodal equilibrium of forces, but also have inner stress field equilibrium. The displacement interpolation functions inside each individual element are truncated polynomial solutions of differential equations. Such elements are likely to give better solutions than the existing elements. In this thesis, a new family of finite elements in which the assumed displacement function satisfies the differential equations of stress field equilibrium is proposed. A general procedure for constructing the displacement functions and using these functions in the generation of elemental stiffness matrices has been developed. The approach to developing field equilibrium elements is quite general, and various elements to analyse different types of structures can be formulated from the corresponding stress field equilibrium equations. Using this procedure, a nine-node quadrilateral element SFCNQ for plane stress analysis, a sixteen-node solid element SFCSS for three-dimensional stress analysis and a four-node quadrilateral element SFCFP for plate bending problems have been formulated. For implementing these elements, computer programs based on modular concepts have been developed. Numerical investigations on the performance of these elements have been carried out through standard test problems for validation purposes. Comparisons involving theoretical closed-form solutions as well as results obtained with existing finite elements have also been made. It is found that the new elements perform well in all the situations considered. Solutions in all the cases converge correctly to the exact values. In many cases, convergence is faster when compared with other existing finite elements. The behaviour of field consistent elements should generate a great deal of interest amongst users of finite elements.
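As a hedged illustration of the underlying idea (not the thesis formulation), the sketch below uses symbolic algebra to find which quadratic displacement fields satisfy the plane-stress equilibrium equations with no body forces, i.e. the kind of equilibrium-satisfying interpolation described above; the coefficient names and the quadratic basis are assumptions.

# Hedged sketch: generic plane-stress check, not the SFCNQ/SFCSS/SFCFP derivation.
import sympy as sp

x, y, E, nu = sp.symbols("x y E nu", positive=True)
a = sp.symbols("a0:6")              # coefficients of u(x, y)
b = sp.symbols("b0:6")              # coefficients of v(x, y)
basis = [1, x, y, x**2, x*y, y**2]

u = sum(ai * t for ai, t in zip(a, basis))
v = sum(bi * t for bi, t in zip(b, basis))

# Plane-stress strain-displacement and constitutive relations.
ex, ey, gxy = sp.diff(u, x), sp.diff(v, y), sp.diff(u, y) + sp.diff(v, x)
sx  = E / (1 - nu**2) * (ex + nu * ey)
sy  = E / (1 - nu**2) * (ey + nu * ex)
txy = E / (2 * (1 + nu)) * gxy

# Equilibrium residuals; solving them constrains the admissible coefficients.
r1 = sp.expand(sp.diff(sx, x) + sp.diff(txy, y))
r2 = sp.expand(sp.diff(txy, x) + sp.diff(sy, y))
constraints = sp.solve([r1, r2], a[3:] + b[3:], dict=True)
print(constraints)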
Abstract:
This thesis deals with the use of simulation as a problem-solving tool to solve a few logistic system related problems. More specifically, it relates to studies on transport terminals. Transport terminals are key elements in the supply chains of industrial systems. One of the problems related to the use of simulation is the multiplicity of models needed to study different problems. There is a need for development of methodologies related to conceptual modelling which will help reduce the number of models needed. Three different logistic terminal systems, viz. a railway yard, the container terminal of a port, and an airport terminal, were selected as cases for this study. The standard methodology for simulation development, consisting of system study and data collection, conceptual model design, detailed model design and development, model verification and validation, experimentation, analysis of results, and reporting of findings, was carried out. We found that models could be classified into tightly pre-scheduled, moderately pre-scheduled and unscheduled systems. Three types of simulation models (called TYPE 1, TYPE 2 and TYPE 3) of various terminal operations were developed in the simulation package Extend. All models were of the discrete-event simulation type. Simulation models were successfully used to help solve strategic, tactical and operational problems related to three important logistic terminals, as set out in our objectives. From the point of view of contribution to conceptual modelling, we have demonstrated that grouping problems into operational, tactical and strategic categories and matching them with tightly pre-scheduled, moderately pre-scheduled and unscheduled systems is a good workable approach which reduces the number of models needed to study different terminal related problems.
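As a hedged illustration (not one of the thesis's Extend models), the sketch below is a minimal discrete-event simulation of a single-server terminal gate of the unscheduled kind discussed above; the arrival and service rates are made-up parameters.

# Hedged sketch: one-server queue with Poisson arrivals, made-up rates.
import heapq
import random

random.seed(0)
SIM_TIME = 8 * 60.0            # minutes of simulated operation
MEAN_ARRIVAL, MEAN_SERVICE = 6.0, 5.0

events = [(random.expovariate(1 / MEAN_ARRIVAL), "arrival")]
queue, busy_until, waits = [], 0.0, []

while events:
    time, kind = heapq.heappop(events)
    if time > SIM_TIME:
        break
    if kind == "arrival":
        queue.append(time)
        heapq.heappush(events, (time + random.expovariate(1 / MEAN_ARRIVAL), "arrival"))
    # Start service whenever the server is free and someone is waiting.
    if queue and busy_until <= time:
        arrival_time = queue.pop(0)
        waits.append(time - arrival_time)
        busy_until = time + random.expovariate(1 / MEAN_SERVICE)
        heapq.heappush(events, (busy_until, "departure"))

print(f"served {len(waits)} vehicles, mean wait {sum(waits) / len(waits):.1f} min")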
Abstract:
Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms which address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins. The number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix. Rows in the matrix represent genes and columns represent experimental conditions. Experimental conditions can be different tissue types or time points. Entries in the gene expression matrix are real values. Through the analysis of gene expression data it is possible to determine the behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interaction, their respective contribution to the same pathways, and so on. Similar expression patterns are exhibited by the genes participating in the same biological process. These patterns have immense relevance and application in bioinformatics and clinical research. They are used in the medical domain to aid in more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify various patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering is introduced. Biclustering refers to simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards developing approaches which are capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix. The rows and columns in the submatrix need not be contiguous as in the gene expression data matrix. Biclusters are not disjoint. Computation of biclusters is costly because one has to consider all the combinations of columns and rows in order to find all the biclusters. The search space for the biclustering problem is 2^(m+n), where m and n are the number of genes and conditions respectively. Usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data. All these algorithms make use of a measure called the mean squared residue to search for biclusters. The objective here is to identify the biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds. These seeds are generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint based, greedy and metaheuristic. Constraint based algorithms use one or more of several constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms were implemented on the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all these algorithms, as validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms. The algorithms developed in this work overcome some of the problems associated with existing algorithms. With the help of some of the algorithms developed in this work, biclusters with very high row variance, higher than the row variance obtained by any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma data sets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
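As a hedged illustration (not one of the ten thesis algorithms), the sketch below computes the mean squared residue of a candidate bicluster, the coherence measure that all of the algorithms described above evaluate against a threshold; the expression matrix and the row/column index sets are made-up example values.

# Hedged sketch: random expression matrix, standard mean-squared-residue formula.
import numpy as np

def mean_squared_residue(data, rows, cols):
    """MSR of the submatrix data[rows][:, cols] (lower = more coherent bicluster)."""
    sub = data[np.ix_(rows, cols)]
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    overall = sub.mean()
    residue = sub - row_means - col_means + overall
    return float((residue ** 2).mean())

rng = np.random.default_rng(3)
expression = rng.random((50, 20))               # genes x conditions matrix
print(mean_squared_residue(expression, rows=[1, 4, 7, 9], cols=[0, 2, 5]))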