993 results for Processing Resources
Abstract:
A dissociation between two putative measures of resource allocation, skin conductance responding and secondary task reaction time (RT), has been observed during auditory discrimination tasks. Four experiments investigated the time course of this dissociation effect with a visual discrimination task. Participants were presented with circles and ellipses and instructed to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of the other shape (task-irrelevant). Concurrent with this task, participants made a speeded motor response to an auditory probe. Experiment 1 showed that skin conductance responses were larger during task-relevant stimuli than during task-irrelevant stimuli, whereas RT to probes presented 150 ms after shape onset was slower during task-irrelevant stimuli. Experiments 2 to 4 found slower RT during task-irrelevant stimuli for probes presented from 300 ms before shape onset until 150 ms after shape onset. For probes presented 3,000 and 4,000 ms after shape onset, probe RT was slower during task-relevant stimuli. The similarities between the observed time course and the so-called psychological refractory period (PRP) effect are discussed.
Abstract:
The effect that the difficulty of the discrimination between task-relevant and task-irrelevant stimuli has on the relationship between skin conductance orienting and secondary task reaction time (RT) was examined. Participants (N = 72) counted the number of longer-than-usual presentations of one shape (task-relevant) and ignored presentations of another shape (task-irrelevant). The difficulty of discriminating between the two shapes varied across three groups (low, medium, and high difficulty). Simultaneously with the primary counting task, participants performed a secondary RT task to acoustic probes presented 50, 150, and 2000 ms after shape onset. Skin conductance orienting was larger, and secondary RT at the 2000 ms probe position was slower, during task-relevant shapes than during task-irrelevant shapes in the low-difficulty group. This difference declined as discrimination difficulty increased, such that there was no difference in the high-difficulty group. Secondary RT was slower during task-irrelevant shapes than during task-relevant shapes only in the medium-difficulty group, and only at the 150 ms probe position in the first half of the experiment. The close relationship between autonomic orienting and secondary RT at the 2000 ms probe position suggests that orienting reflects the resource allocation that results from the number of matching features between a stimulus input and a mental representation primed as significant.
Abstract:
In four experiments, ERPs to emotional (negative and positive) and neutral stimuli were examined as a function of participants' trait anxiety and repressive defensiveness. The experiments investigated the time course of attentional bias in the processing of such stimuli. Pictures of angry, happy, and neutral faces were used in two of the experiments, and pictures of mutilated, happy, and neutral faces were used in the others. ERPs to emotional and neutral stimuli were recorded from parietal, temporal, and frontal sites. Analysis of the P3 component indicated that the peak magnitude of the P3 at the parietal and temporal sites reflected an interactive function of trait anxiety and defensiveness. Repressors (low reported anxiety, high defensiveness) showed a consistent pattern of greater P3 magnitude at the parietal and temporal sites for emotional faces (angry, happy, and mutilated) than did high-anxious and low-anxious participants. Participants did not differ in P3 magnitude when ERPs to neutral stimuli were investigated (e.g., a fixation cross). The findings indicate that repressors dedicate greater processing resources to emotional material, as compared to neutral material, than either high-anxious or low-anxious individuals. Results of the four experiments are discussed within the theoretical framework of Derakshan and Eysenck (1998). The importance of understanding the role of differences in information processing, in the experience and avoidance of emotional information, as a function of trait anxiety and defensiveness is emphasized.
Abstract:
Information processing accounts propose that autonomic orienting reflects the amount of resources allocated to process a stimulus. However, secondary task reaction time (RT), a supposed measure of processing resources, has shown a dissociation from autonomic orienting. The present study tested the hypothesis that secondary task RT reflects a serial processing mechanism. Participants (N = 24) were presented with circle and ellipse shapes and asked to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of a second shape (task-irrelevant). Concurrent with the counting task, participants performed a secondary RT task to an auditory probe presented at either a high or low intensity and at two different probe positions following shape onset (50 and 300 ms). Electrodermal orienting was larger during task-relevant shapes than during task-irrelevant shapes, but secondary task RT to the high-intensity probe was slower during the latter. In addition, an underadditive interaction between probe stimulus intensity and probe position was found in secondary RT. The findings are consistent with a serial processing model of secondary RT and suggest that the notion of processing stages should be incorporated into current information-processing models of autonomic orienting.
Abstract:
Hard real-time multiprocessor scheduling has seen, in recent years, the flourishing of semi-partitioned scheduling algorithms. This category of scheduling schemes combines elements of partitioned and global scheduling to achieve efficient utilization of the system's processing resources with strong schedulability guarantees and low dispatching overheads. The sub-class of slot-based "task-splitting" scheduling algorithms, in particular, offers very good trade-offs between schedulability guarantees (in the form of high utilization bounds) and the number of preemptions/migrations involved. However, until now no unified schedulability theory existed for such algorithms; each was formulated with its own accompanying analysis. This article changes this fragmented landscape by formulating a more unified schedulability theory covering the two state-of-the-art slot-based semi-partitioned algorithms, S-EKG and NPS-F (both fixed job-priority based). This new theory is based on exact schedulability tests, thus also overcoming many sources of pessimism in existing analyses. In turn, since schedulability testing guides the task assignment under the schemes in consideration, we also formulate an improved task assignment procedure. As the other main contribution of this article, and in response to the fact that many unrealistic assumptions present in the original theory tend to undermine the theoretical potential of such scheduling schemes, we identified and modelled into the new analysis all overheads incurred by the algorithms in consideration. The outcome is a new overhead-aware schedulability analysis that permits increased efficiency and reliability. The merits of this new theory are evaluated by an extensive set of experiments.
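Semi-partitioned schemes such as those above start from a partitioned assignment in which a schedulability test admits tasks onto processors one by one. As a hedged illustration only, the sketch below shows a first-fit assignment driven by a simple utilization-sum admission check; the article's actual tests for S-EKG and NPS-F are exact and far more involved, and all names here are illustrative assumptions, not the article's procedure.

```python
# Illustrative first-fit task assignment guided by a utilization-based
# admission check (NOT the exact tests of S-EKG/NPS-F).

def utilization(task):
    wcet, period = task
    return wcet / period

def first_fit_assign(tasks, n_processors, cap=1.0):
    """Assign (wcet, period) tasks to processors, first-fit by decreasing utilization."""
    bins = [[] for _ in range(n_processors)]
    loads = [0.0] * n_processors
    unassigned = []
    for task in sorted(tasks, key=utilization, reverse=True):
        u = utilization(task)
        for i in range(n_processors):
            if loads[i] + u <= cap:        # admission check on this processor
                bins[i].append(task)
                loads[i] += u
                break
        else:
            # A semi-partitioned scheme would "split" such a task across
            # processors in reserved time slots instead of rejecting it.
            unassigned.append(task)
    return bins, unassigned
```

Tasks that fit nowhere whole are exactly the ones a slot-based task-splitting scheme would serve across two processors.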
Abstract:
Faced with the stagnation of uniprocessor technology over the past decade, the major microprocessor manufacturers found in multi-core technology the answer to the market's growing processing needs. For years, software developers watched their applications ride the performance gains of each new generation of sequential processors, but as processing capacity now scales with the number of processors, sequential computations must be decomposed into concurrent parts that can execute in parallel, so that they can use the additional processing units and complete sooner. Parallel programming entails a paradigm entirely distinct from sequential programming. Unlike the sequential computers typified by the Von Neumann model, the heterogeneity of parallel architectures requires parallel programming models that abstract architectural details away from programmers and simplify the development of concurrent applications. The most popular parallel programming models encourage programmers to identify concurrent instructions in their program logic and to express them as tasks that can be assigned to distinct processors and executed simultaneously. These tasks are typically spawned at run time and assigned to processors by the underlying execution engine. Since processing requirements are usually variable and not known a priori, the mapping of tasks to processors must be determined dynamically, in response to unpredictable changes in execution requirements. As the volume of computation grows, it becomes increasingly infeasible to guarantee its timing constraints on uniprocessor platforms.
As real-time systems begin to adapt to the parallel computing paradigm, there is a growing drive to integrate real-time execution with interactive applications on the same hardware, in a world where technology becomes ever smaller, lighter, more ubiquitous, and more portable. This integration requires scheduling solutions that simultaneously guarantee the timing requirements of real-time tasks and maintain an acceptable level of QoS for the remaining executions. To this end, it is imperative that real-time applications parallelize, so as to minimize their response times and maximize the utilization of processing resources. This introduces a new dimension to the scheduling problem, which must respond correctly to new, unpredictable execution requirements and quickly devise the task mapping that best serves the system's performance criteria. Server-based scheduling makes it possible to reserve a fraction of the processing capacity for the execution of real-time tasks, and to ensure that latency effects on their execution do not affect the reservations stipulated for other executions. For tasks scheduled by their worst-case execution time, or tasks with variable execution times, it is likely that the stipulated bandwidth will not be fully consumed. To improve system utilization, capacity-sharing algorithms donate unused capacity to the execution of other tasks, while preserving the isolation guarantees between servers. With proven efficiency in terms of space, time, and communication, the work-stealing mechanism has been gaining popularity as a methodology for scheduling tasks with dynamic and irregular parallelism. The p-CSWS algorithm combines server-based scheduling with capacity-sharing and work-stealing to meet the scheduling needs of open real-time systems.
While server-based scheduling allows processing resources to be shared without timing interference, a new work-stealing policy operating on top of the capacity-sharing mechanism exploits parallelism in a way that improves application response times and system utilization. This thesis proposes an implementation of the p-CSWS algorithm for Linux. In keeping with the modular structure of the Linux scheduler, a new scheduling class is defined to evaluate the applicability of the p-CSWS heuristic under real conditions. Having overcome the obstacles intrinsic to Linux kernel programming, extensive experimental tests prove that p-CSWS is more than an attractive theoretical concept, and that the heuristic exploitation of parallelism proposed by the algorithm benefits the response times of real-time applications, as well as the performance and efficiency of the multiprocessor platform.
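The work-stealing discipline at the heart of schedulers like p-CSWS can be reduced to one rule: each worker owns a double-ended queue, pushing and popping tasks at the bottom, while idle workers steal from the top of a victim's deque. The sketch below shows only that rule; the server-based bandwidth accounting and capacity-sharing of p-CSWS are not modelled, and the class and method names are illustrative.

```python
from collections import deque

class Worker:
    """A worker with its own task deque, following the work-stealing rule."""

    def __init__(self):
        self.tasks = deque()

    def push(self, task):
        self.tasks.append(task)            # owner works at the bottom (LIFO)

    def pop(self):
        return self.tasks.pop() if self.tasks else None

    def steal_from(self, victim):
        # Thieves take the oldest task from the top (FIFO end), which tends
        # to be the largest remaining subcomputation in divide-and-conquer.
        return victim.tasks.popleft() if victim.tasks else None
```

Owner and thief thus operate on opposite ends of the deque, minimizing contention between them.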
Abstract:
Nowadays, the 3D scanning cameras and microscopes on the market use digital or discrete sensors, such as CCDs or CMOS, for object detection applications. However, these combined systems are not fast enough for some application scenarios, since they require large data processing resources and can be cumbersome. There is therefore a clear interest in exploring the possibilities and performance of analogue sensors, such as arrays of position sensitive detectors, with the final goal of integrating them in 3D scanning cameras or microscopes for object detection purposes. The work performed in this thesis deals with the implementation of prototype systems to explore object detection using amorphous silicon position sensors of 32 and 128 lines, produced in the clean room at CENIMAT-CEMOP. The first phase of this work covered the fabrication of the sensors and the study of their static and dynamic specifications, as well as their signal conditioning, building on the existing scientific and technological knowledge. Subsequently, suitable data acquisition and signal processing electronics were assembled. Various prototypes were developed for the 32 and 128 line PSD array sensors. Appropriate optical solutions were integrated with the constructed prototypes, allowing the required experiments to be carried out and the results presented in this thesis to be achieved. All control, data acquisition and 3D rendering software was implemented for the existing systems. These components were combined to form several integrated systems for the 32 and 128 line PSD 3D sensors. The performance of the 32-line PSD array sensor and system was evaluated for machine vision applications, such as 3D object rendering, as well as for microscopy applications, such as micro-object movement detection. Trials were also performed with the 128-line PSD array sensor systems.
Sensor channel non-linearities of approximately 4 to 7% were obtained. Overall, the results show the possibility of using a linear array of 32/128 1D line sensors based on amorphous silicon technology to render 3D profiles of objects. The system and setup presented allow 3D rendering at high speeds and high frame rates. The minimum detail or gap that the sensor system can detect is approximately 350 µm with the current setup. It is also possible to render an object in 3D within a scanning angle range of 15° to 85° and to identify its real height as a function of the scanning angle and the image displacement distance on the sensor. Both simple and more complex objects, such as a rubber and a plastic fork, can be rendered in 3D properly and accurately, also at high resolution, using this sensor and system platform. The n-i-p structure sensor system can detect primary and even derived colors of objects through proper adjustment of the system's integration time and by combining white, red, green and blue (RGB) light sources. A mean colorimetric error of 25.7 was obtained. It is also possible to detect the movement of micrometer-scale objects using the 32-line PSD sensor system. This kind of setup makes it possible to detect whether a micro-object is moving, what its dimensions are, and what its position in two dimensions is, even at high speeds. Results show a non-linearity of about 3% and a spatial resolution of < 2 µm.
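The abstract states that an object's real height is recovered as a function of the scanning angle and the image displacement on the sensor. The thesis's exact optical model is not given here; as a hedged sketch under that assumption, the standard laser-triangulation approximation relates the two as follows, with the function name and magnification parameter being illustrative:

```python
import math

# Hedged sketch of the textbook laser-triangulation relation, NOT the
# thesis's calibrated optical model: height grows with the displacement
# measured on the PSD and shrinks as the scanning angle flattens.

def object_height(displacement_mm, scan_angle_deg, magnification=1.0):
    """Approximate object height from PSD image displacement and scan angle."""
    theta = math.radians(scan_angle_deg)
    return displacement_mm / (magnification * math.sin(theta))
```

At a 90° scan angle the height equals the (demagnified) displacement; at shallower angles the same displacement corresponds to a larger height, which is consistent with the angle dependence the abstract describes.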
Abstract:
Using game theory, we developed a kin-selection model to investigate the consequences of local competition and inbreeding depression on the evolution of natal dispersal. Mating systems have the potential to favor strong sex biases in dispersal because sex differences in potential reproductive success affect the balance between local resource competition and local mate competition. No bias is expected when local competition equally affects males and females, as happens in monogamous systems and also in polygynous or promiscuous ones as long as female fitness is limited by extrinsic factors (breeding resources). In contrast, a male-biased dispersal is predicted when local mate competition exceeds local resource competition, as happens under polygyny/promiscuity when female fitness is limited by intrinsic factors (maximal rate of processing resources rather than resources themselves). This bias is reinforced by among-sex interactions: female philopatry enhances breeding opportunities for related males, while male dispersal decreases the chances that related females will inbreed. These results meet empirical patterns in mammals: polygynous/promiscuous species usually display a male-biased dispersal, while both sexes disperse in monogamous species. A parallel is drawn with sex-ratio theory, which also predicts biases toward the sex that suffers less from local competition. Optimal sex ratios and optimal sex-specific dispersal show mutual dependence, which argues for the development of coevolution models.
Abstract:
Selection of action may rely on external guidance or be motivated internally, engaging partially distinct cerebral networks. With age, there is an increased allocation of sensorimotor processing resources, accompanied by a reduced differentiation between the two networks of action selection. The present study examines the age effects on the motor-related oscillatory patterns related to the preparation of externally and internally guided movements. Thirty-two older and 30 younger adults underwent three delayed motor tasks with S1 as preparatory and S2 as imperative cue: Full, laterality instructed by S1 (external guidance); Free, laterality freely selected (internal guidance); None, laterality instructed by S2 (no preparation). Electroencephalogram (EEG) was recorded using 64 surface electrodes. Motor-Related Amplitude Asymmetries (MRAA), indexing the lateralization of oscillatory activities, were analyzed within the S1-S2 interval in the mu (9-12 Hz) and low beta (15-20 Hz) motor-related frequency bands. Reaction times to S2 were slower in older than younger subjects, and slower in the Free than in the Full condition in older subjects only. In the Full condition, there were significant mu MRAA in both age groups, and significant low beta MRAA only in older adults. The Free condition was associated with large mu MRAA in younger adults and limited low beta MRAA in older adults. In younger subjects, the lateralization of mu activity in both Full and Free conditions indicated effective external and internal motor preparation. In older subjects, external motor preparation was associated with lateralization of low beta in addition to mu activity, compatible with an increase of motor-related resources.
In contrast, the absence of mu lateralization and only limited low beta lateralization in internal motor preparation were concomitant with reaction time slowing, suggesting less efficient cerebral processes subtending free movement selection in older adults and indicating reduced capacity for internally driven action with age.
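An asymmetry index such as MRAA contrasts band-limited power at homologous electrode sites (e.g., over the left and right motor cortex). The abstract does not spell out the computation, so the sketch below is only a hedged illustration of a signed power asymmetry in the mu band; the naive DFT, function names, and electrode pairing are assumptions, not the study's method.

```python
import math

# Hedged illustration of a lateralization index over homologous sites
# (NOT the study's exact MRAA computation).

def band_power(samples, fs, f_lo, f_hi):
    """Naive DFT power summed over bins falling in [f_lo, f_hi] Hz."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def asymmetry(left, right, fs, band=(9.0, 12.0)):
    """Signed asymmetry of mu-band power between homologous electrodes, in [-1, 1]."""
    pl = band_power(left, fs, *band)
    pr = band_power(right, fs, *band)
    total = pl + pr
    return (pl - pr) / total if total else 0.0
```

A value near zero indicates symmetric oscillatory activity; values toward ±1 indicate lateralization toward one hemisphere, which is the quantity the MRAA analysis tracks across the S1-S2 interval.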
Abstract:
The main objective of this thesis was to quantify and compare the effort required to recognize speech in noise in young adults and older adults with normal hearing and normal visual acuity (with or without corrective lenses). The effort associated with speech perception relates to the attentional and cognitive resources required to understand speech. The first study (Experiment 1) assessed the effort associated with auditory speech recognition (hearing a talker), while the second study (Experiment 2) assessed the effort associated with audiovisual speech recognition (hearing and seeing a talker's face). Effort was measured in two ways. First, a behavioral approach used a dual-task experimental paradigm, pairing a word-recognition task with a vibrotactile pattern-recognition task. Second, effort was quantified with a questionnaire asking participants to rate the effort associated with the behavioral tasks. Both measures of effort were used in two experimental conditions: 1) equivalent level, in which the level of the noise masking the speech was the same for all participants, and 2) equivalent performance, in which the noise level was adjusted so that word-recognition performance was identical for the two groups of participants. Performance on the vibrotactile task revealed that older adults expended more effort than young adults in both experimental conditions, regardless of the perceptual modality in which the speech stimuli were presented (i.e., auditory-only or audiovisual).
Overall, the 'cost' associated with vibrotactile task performance was highest for older adults when speech was presented in the audiovisual modality. While visual cues can improve audiovisual speech recognition, our results suggest that they can also place an additional load on the resources used to process information. This additional load has detrimental consequences for performance on the word-recognition and vibrotactile pattern-recognition tasks when they are carried out under dual-task conditions. Consistent with previous studies, correlation coefficients computed from the data of Experiments 1 and 2 support the notion that dual-task behavioral measures and questionnaire responses assess different dimensions of the effort associated with speech recognition. Since the effort associated with speech perception rests on both auditory and cognitive factors, a third study was completed to explore whether auditory working memory helps explain the variance in the data on speech perception effort. These analyses also allowed the response patterns obtained for these two factors to be compared between young adults and older adults. For young adults, the results of a sequential regression analysis showed that a measure of auditory capacity (span size) was related to effort, while a measure of auditory processing (alphabetical recall) was related to the accuracy with which words were recognized under dual-task conditions.
However, these relationships were present neither in the data obtained for the older adult group nor in the data obtained when the speech recognition tasks were performed in the audiovisual modality. Further studies are needed to identify the cognitive factors underlying the effort associated with speech perception, particularly in older adults.
Abstract:
Phenol and cresols are a good example of primary chemical building blocks, of which 2.8 million tons are currently produced in Europe each year. At present, these primary phenolic building blocks are produced by refining processes from fossil hydrocarbons: 5% of worldwide production comes from coal (which contains 0.2% phenols) through the distillation of the tar residue after the production of coke, while 95% of current world production of phenol comes from the distillation and cracking of crude oil. In nature, phenolic compounds are present in terrestrial higher plants and ferns in several different chemical structures, while they are essentially absent in lower organisms and in animals. Biomass (which contains 3-8% phenols) represents a substantial, presently underexploited source of secondary chemical building blocks. These phenolic derivatives are currently used in quantities of tens of thousands of tons to produce high-value products such as food additives and flavours (e.g. vanillin), fine chemicals (e.g. non-steroidal anti-inflammatory drugs such as ibuprofen or flurbiprofen) and polymers (e.g. poly p-vinylphenol, a photosensitive polymer for electronic and optoelectronic applications). European agrifood waste represents a low-cost, abundant raw material (250 million tons per year) which does not subtract land use and processing resources from necessary sustainable food production. The class of phenolic compounds essentially consists of simple phenols, phenolic acids, hydroxycinnamic acid derivatives, flavonoids and lignans. As in the case of coke production, removing the phenolic content from biomass also upgrades the residual biomass. Focusing on the phenolic component of agrifood wastes opens up huge processing and marketing opportunities, since phenols are used as chemical intermediates for a large number of applications, ranging from pharmaceuticals and agricultural chemicals to food ingredients.
Following this approach, we developed a biorefining process to recover the phenolic fraction of wheat bran, based on commercial enzymatic biocatalysts in a completely water-based process and on polymeric resins, with the aim of substituting secondary chemical building blocks with the same compounds naturally present in biomass. We characterized several industrial enzymatic products for their ability to hydrolyze the different molecular features present in wheat bran cell wall structures, focusing on the hydrolysis of polysaccharide chains and phenolic cross-links. These industrial biocatalysts were tested on wheat bran, and the optimized process allowed up to 60% of the treated matter to be liquefied. The enzymatic treatment was also able to solubilize up to 30% of the alkali-extractable ferulic acid. An extraction process for the phenolic fraction of the hydrolyzed wheat bran was developed, based on adsorption/desorption on the styrene-polyvinylbenzene weak cation-exchange resin Amberlite IRA 95. The efficiency of the resin was tested on different model systems containing ferulic acid, and the adsorption and desorption working parameters were optimized for the crude enzymatically hydrolyzed wheat bran. The extraction process had an overall yield of 82% and produced concentrated extracts containing up to 3000 ppm of ferulic acid. The crude enzymatically hydrolyzed wheat bran and the concentrated extract were finally used as substrates in a bioconversion of ferulic acid into vanillin through resting-cell fermentation. The bioconversion process gave vanillin yields of 60-70% within 5-6 hours of fermentation. Our findings are the first step towards demonstrating the economic feasibility of recovering biophenols from agrifood wastes through a whole-crop approach in a sustainable biorefining process.
Abstract:
Prospective memory (ProM) is the ability to remember and carry out a planned intention in the future. ProM performance can be improved by instructing participants to prioritize the ProM task over the ongoing task. However, the improvement of ProM performance obtained by emphasizing relative importance is typically restricted to situations in which the overlap between the processing requirements of the ProM task and the ongoing task is low. In such cases, additional processing resources are allocated to the ProM task and, consequently, a cost emerges for the ongoing task. The aim of the present study was to investigate this relationship further. Participants were asked to respond to either semantic or perceptual ProM cues, which were embedded in a complex ongoing short-term memory task. We manipulated absolute rather than relative importance by emphasizing the importance of the ProM task to half of the participants (i.e., without instructing them to prioritize it over the ongoing task). The results revealed that importance boosted ProM performance independently of the processing overlap between the ProM task and the ongoing task. Moreover, no additional cost was associated with absolute importance. These results challenge the view that importance always enhances the allocation of resources to the ProM task.
Abstract:
This report addresses speculative parallelism (the assignment of spare processing resources to tasks which are not known to be strictly required for the successful completion of a computation) at the user and application level. At this level, the execution of a program is seen as a (dynamic) tree, or a graph in general. A solution to a problem is a traversal of this graph from the initial state to a node known to be the answer. Speculative parallelism then represents the assignment of resources to multiple branches of this graph even if they are not positively known to be on the path to a solution. In highly non-deterministic programs the branching factor can be very high, and a naive assignment will very soon use up all the resources. This report presents work assignment strategies other than the usual depth-first and breadth-first. Instead, best-first strategies are used. Since their definition is application-dependent, the application language contains primitives that allow the user (or application programmer) to a) indicate when intelligent OR-parallelism should be used; b) provide the functions that define "best"; and c) indicate when to use them. An abstract architecture enables those primitives to perform the search in a "speculative" way, using several processors, synchronizing them, killing the siblings of the path leading to the answer, etc. The user is freed from worrying about these interactions. Several search strategies are proposed and their implementation issues are addressed. "Armageddon," a global pruning method, is introduced, together with both a software and a hardware implementation for it. The concepts exposed are applicable to areas of Artificial Intelligence such as extensive expert systems, planning, game playing, and, in general, large search problems. The proposed strategies, although showing promise, have not been evaluated by simulation or experimentation.
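The best-first strategy the report builds on can be reduced, sequentially, to a priority-queue loop ordered by the user-supplied "best" function. The sketch below is only that sequential core; in the report's architecture the top-ranked open branches would be explored speculatively on several processors, and the function names here are illustrative, not the report's primitives.

```python
import heapq

# Sequential core of a best-first search: the user-defined 'best' function
# (lower score = better) orders the open branches of the search graph.

def best_first_search(start, expand, is_goal, best):
    """Explore nodes in order of the user-defined 'best' score."""
    frontier = [(best(start), start)]      # priority queue of open branches
    seen = {start}                         # avoid revisiting graph nodes
    while frontier:
        _, node = heapq.heappop(frontier)  # most promising open branch
        if is_goal(node):
            return node
        for child in expand(node):
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (best(child), child))
    return None                            # search space exhausted
```

With depth-first or breadth-first orderings substituted for `best`, the same loop degenerates into the usual strategies the report sets aside.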
Abstract:
In this work, a platform for the conditioning, digitizing, visualization and recording of EMG signals was developed. After acquisition, the analysis can be done with signal processing techniques. The platform consists of two modules which acquire electromyography (EMG) signals through surface electrodes, band-limit them to the frequency band of interest, filter out power grid interference, and digitize the signals with the analog-to-digital converter of each module's microcontroller. The data are then sent to the computer over the USB interface using the HID specification, displayed in real time in graphical form, and stored in files. The processing resources implemented include signal rectification (absolute value), effective value (RMS) computation, Fourier analysis, IIR digital filtering, and adaptive filtering. Initial platform tests were performed with signals from the lower and upper limbs, with the aim of comparing EMG signal laterality. The open platform is intended for educational activities and academic research, allowing the addition of other processing methods that researchers want to evaluate, or other required analyses.
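Among the processing operations listed above, the effective value (RMS) is the standard amplitude-envelope estimate for EMG. As a minimal sketch (function names are illustrative, not the platform's API), sample-wise and windowed RMS can be written as:

```python
import math

# Effective value (RMS) of EMG samples, and its windowed form, which is
# the usual amplitude-envelope estimate in EMG analysis.

def rms(samples):
    """Root-mean-square of a sequence of EMG samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def windowed_rms(samples, window):
    """RMS envelope over consecutive non-overlapping windows."""
    return [rms(samples[i:i + window])
            for i in range(0, len(samples) - window + 1, window)]
```

Comparing the windowed RMS envelopes of left- and right-limb recordings is one simple way to quantify the laterality the initial tests examined.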