914 results for Computer Supported Cooperative Work (CSCW)
Abstract:
Following their detection and seizure by police and border guard authorities, false identity and travel documents are usually scanned, producing digital images. This research investigates the potential of these images to classify false identity documents, to highlight links between documents produced by the same modus operandi or the same source, and thus to support forensic intelligence efforts. Inspired by previous research on digital images of Ecstasy tablets, a systematic and complete method has been developed to acquire, collect, process and compare images of false identity documents. This first part of the article highlights the critical steps of the method and the development of a prototype that processes regions of interest extracted from images. Acquisition conditions have been fine-tuned in order to optimise the reproducibility and comparability of images. Different filters and comparison metrics have been evaluated, and the performance of the method has been assessed using two calibration and validation sets of documents, made up of 101 Italian driving licenses and 96 Portuguese passports seized in Switzerland, among which some were known to come from common sources. Results indicate that using Hue and Edge filters, or their combination, to extract profiles from images, and then comparing the profiles with a Canberra distance-based metric, provides the most accurate classification of documents. The method also appears to be quick, efficient and inexpensive. It can easily be operated from remote locations and shared amongst different organisations, which makes it very convenient for future operational applications. The method could serve as a fast first triage step that may help target more resource-intensive profiling methods (based, for instance, on a visual, physical or chemical examination of documents).
Its contribution to forensic intelligence and its application to several sets of false identity documents seized by police and border guards will be developed in a forthcoming article (part II).
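The Canberra distance-based comparison of image profiles mentioned in this abstract can be illustrated with a minimal sketch. This is the standard Canberra distance only; the function name, the zero-handling choice, and the example profile values are hypothetical, and the article's actual profile-extraction pipeline is not reproduced here.

```python
import numpy as np

def canberra_distance(p, q, eps=1e-12):
    """Canberra distance between two 1-D profiles.

    Sums |p_i - q_i| / (|p_i| + |q_i|) over all positions,
    skipping positions where both values are (near) zero.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    denom = np.abs(p) + np.abs(q)
    mask = denom > eps  # avoid 0/0 where both profiles are zero
    return float(np.sum(np.abs(p - q)[mask] / denom[mask]))

# Two short profiles extracted from document images (hypothetical values):
a = [0.2, 0.5, 0.1]
b = [0.1, 0.5, 0.3]
print(canberra_distance(a, b))  # ≈ 0.833; identical profiles give 0.0
```

Because each term is normalised by the magnitude of the two values being compared, the metric weights relative rather than absolute differences, which suits comparing profiles acquired under slightly varying conditions.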
Abstract:
In order to develop applications for visual interpretation of medical images, the early detection and evaluation of microcalcifications in digital mammograms is very important, since their presence is often associated with a high incidence of breast cancer. Accurate classification into benign and malignant groups would help improve diagnostic sensitivity as well as reduce the number of unnecessary biopsies. The challenge here is the selection of useful features to distinguish benign from malignant microcalcifications. Our purpose in this work is to analyse a microcalcification evaluation method based on a set of shape-based features extracted from the digitised mammography. The segmentation of the microcalcifications is performed using a fixed-tolerance region growing method to extract boundaries of calcifications with manually selected seed pixels. Taking into account that shapes and sizes of clustered microcalcifications have been associated with a high risk of carcinoma based on different subjective measures, such as whether or not the calcifications are irregular, linear, vermiform, branched, rounded or ring-like, our efforts were addressed to obtaining a feature set related to shape. The identification of the parameters concerning the malignant character of the microcalcifications was performed on a set of 146 mammograms whose real diagnosis was known in advance from biopsies. This allowed identifying the following shape-based parameters as the relevant ones: Number of clusters, Number of holes, Area, Feret elongation, Roughness, and Elongation. Further experiments on a set of 70 new mammograms showed that the performance of the classification scheme is close to the mean performance of three expert radiologists, which allows the proposed method to be considered for assisting the diagnosis and encourages continuing the investigation by adding new features not only related to shape.
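Fixed-tolerance region growing of the kind described above can be sketched as a generic 4-connected implementation. The function name, the tolerance value, and the toy image are illustrative only; the paper's exact tolerance setting and seed-selection procedure are not specified in the abstract.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Fixed-tolerance region growing from a manually chosen seed pixel.

    A pixel joins the region if its intensity differs from the seed
    intensity by at most `tol` and it is 4-connected to the region.
    """
    h, w = image.shape
    seed_val = float(image[seed])
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region

# Toy image: a bright-ish calcification patch (values near 10) on a
# brighter background (values near 50), seed placed inside the patch.
img = np.array([[10, 11, 50],
                [12, 10, 52],
                [51, 53, 54]])
mask = region_grow(img, (0, 0), tol=5)  # grows over the four ~10 pixels
```

The boundary of the resulting mask is then what the shape-based features (Area, Roughness, Elongation, etc.) would be computed from.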
Abstract:
This work presents and tests (for 106 adducts, mainly of the zinc group halides) two empirical equations, based on TG data, to estimate the metal-ligand bond dissociation enthalpy of adducts: <D>(M-O) = t_i / g if t_i < 420 K, and <D>(M-O) = (t_i / g) - 7.75 x 10^-2 . t_i if t_i > 420 K. In these empirical equations, t_i is the thermodynamic temperature of the onset of thermal decomposition of the adduct, as determined by thermogravimetry, and g is a constant factor that is a function of the metal halide considered and of the number of ligands, but does not depend on the ligand itself. For half of the tested adducts, the difference between the experimental and calculated values was less than 5%. For about 80% of the tested adducts, the difference between the experimental (calorimetric) and the calculated (using the proposed equations) values was less than 15%.
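The two empirical equations translate directly into a small function. This is a plain transcription of the formulas above; the abstract does not specify the behaviour exactly at t_i = 420 K, so the choice of branch at the boundary here is an assumption.

```python
def bond_enthalpy(t_i, g):
    """Estimate <D>(M-O), the mean metal-ligand bond dissociation
    enthalpy, from t_i, the onset temperature (K) of thermal
    decomposition measured by TG, and g, the empirical factor that
    depends on the metal halide and the number of ligands.

    <D> = t_i / g                       if t_i < 420 K
    <D> = t_i / g - 7.75e-2 * t_i       if t_i > 420 K
    (behaviour exactly at 420 K is assumed to follow the second branch)
    """
    if t_i < 420.0:
        return t_i / g
    return t_i / g - 7.75e-2 * t_i

print(bond_enthalpy(400.0, 4.0))  # 100.0  (first branch)
print(bond_enthalpy(500.0, 4.0))  # 86.25  (second branch: 125 - 38.75)
```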
Abstract:
Virtual screening is a central technique in drug discovery today. Millions of molecules can be tested in silico with the aim of selecting only the most promising and testing them experimentally. The topic of this thesis is ligand-based virtual screening tools, which take existing active molecules as the starting point for finding new drug candidates. One goal of this thesis was to build a model that gives the probability that two molecules are biologically similar as a function of one or more chemical similarity scores. Another important goal was to evaluate how well different ligand-based virtual screening tools are able to distinguish active molecules from inactive ones. One more criterion set for the virtual screening tools was their applicability in scaffold-hopping, i.e. finding new active chemotypes. In the first part of the work, a link was defined between the abstract chemical similarity score given by a screening tool and the probability that the two molecules are biologically similar. These results help to decide objectively which virtual screening hits to test experimentally. The work also resulted in a new type of data fusion method when using two or more tools. In the second part, five ligand-based virtual screening tools were evaluated and their performance was found to be generally poor. Three reasons for this were proposed: false negatives in the benchmark sets, active molecules that do not share the binding mode, and activity cliffs. In the third part of the study, a novel visualization and quantification method is presented for evaluation of the scaffold-hopping ability of virtual screening tools.
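The abstract does not spell out the form of the probability model, but the idea of mapping a chemical similarity score to a probability of biological similarity can be sketched as a one-dimensional logistic calibration. The model form, the function name, and all data values below are hypothetical, not the thesis's actual method.

```python
import math

def fit_logistic_1d(scores, labels, lr=0.5, epochs=2000):
    """Fit p(similar | score) = sigmoid(a*score + b) by batch gradient
    descent on labelled (score, similar-or-not) molecule pairs."""
    a = b = 0.0
    n = len(scores)
    for _ in range(epochs):
        ga = gb = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(-(a * s + b)))
            ga += (p - y) * s   # gradient of log-loss w.r.t. slope a
            gb += (p - y)       # gradient of log-loss w.r.t. intercept b
        a -= lr * ga / n
        b -= lr * gb / n
    return a, b

# Hypothetical calibration data: similarity scores for molecule pairs,
# labelled 1 if the pair is known to be biologically similar.
scores = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
labels = [0, 0, 0, 1, 0, 1, 1, 1]
a, b = fit_logistic_1d(scores, labels)
p_high = 1.0 / (1.0 + math.exp(-(a * 0.85 + b)))  # P(similar) at score 0.85
p_low = 1.0 / (1.0 + math.exp(-(a * 0.10 + b)))   # P(similar) at score 0.10
```

A calibration of this kind is what makes similarity scores from different tools comparable, which is the prerequisite for the data fusion mentioned in the abstract.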
Abstract:
The focus of the present work was on 10- to 12-year-old elementary school students’ conceptual learning outcomes in science in two specific inquiry-learning environments, laboratory and simulation. The main aim was to examine if it would be more beneficial to combine than contrast simulation and laboratory activities in science teaching. It was argued that the status quo where laboratories and simulations are seen as alternative or competing methods in science teaching is hardly an optimal solution to promote students’ learning and understanding in various science domains. It was hypothesized that it would make more sense and be more productive to combine laboratories and simulations. Several explanations and examples were provided to back up the hypothesis. In order to test whether learning with the combination of laboratory and simulation activities can result in better conceptual understanding in science than learning with laboratory or simulation activities alone, two experiments were conducted in the domain of electricity. In these experiments students constructed and studied electrical circuits in three different learning environments: laboratory (real circuits), simulation (virtual circuits), and simulation-laboratory combination (real and virtual circuits were used simultaneously). In order to measure and compare how these environments affected students’ conceptual understanding of circuits, a subject knowledge assessment questionnaire was administered before and after the experimentation. The results of the experiments were presented in four empirical studies. Three of the studies focused on learning outcomes between the conditions and one on learning processes. Study I analyzed learning outcomes from experiment I. The aim of the study was to investigate if it would be more beneficial to combine simulation and laboratory activities than to use them separately in teaching the concepts of simple electricity. 
Matched trios were created based on the pre-test results of 66 elementary school students and divided randomly into laboratory (real circuits), simulation (virtual circuits) and simulation-laboratory combination (real and virtual circuits simultaneously) conditions. In each condition students had 90 minutes to construct and study various circuits. The results showed that studying electrical circuits in the simulation-laboratory combination environment improved students' conceptual understanding more than studying circuits in the simulation or laboratory environment alone. Although there were no statistical differences between the simulation and laboratory environments, the learning effect was more pronounced in the simulation condition, where the students made clear progress during the intervention, whereas in the laboratory condition students' conceptual understanding remained at an elementary level after the intervention. Study II analyzed learning outcomes from experiment II. The aim of the study was to investigate if and how learning outcomes in simulation and simulation-laboratory combination environments are mediated by implicit (only procedural guidance) and explicit (more structure and guidance for the discovery process) instruction in the context of simple DC circuits. Matched quartets were created based on the pre-test results of 50 elementary school students and divided randomly into simulation implicit (SI), simulation explicit (SE), combination implicit (CI) and combination explicit (CE) conditions. The results showed that when the students were working with the simulation alone, they were able to gain a significantly greater amount of subject knowledge when they received metacognitive support (explicit instruction; SE) for the discovery process than when they received only procedural guidance (implicit instruction; SI). However, this additional scaffolding was not enough to reach the level of the students in the combination environment (CI and CE).
A surprising finding in Study II was that instructional support had a different effect in the combination environment than in the simulation environment. In the combination environment explicit instruction (CE) did not seem to elicit much additional gain for students' understanding of electric circuits compared to implicit instruction (CI). Instead, explicit instruction slowed down the inquiry process substantially in the combination environment. Study III analyzed, from video data, the learning processes of the 50 students that participated in experiment II (cf. Study II above). The focus was on three specific learning processes: cognitive conflicts, self-explanations, and analogical encodings. The aim of the study was to find possible explanations for the success of the combination condition in Experiments I and II. The video data provided clear evidence of the benefits of studying with the real and virtual circuits simultaneously (the combination conditions). Mostly the representations complemented each other, that is, one representation helped students to interpret and understand the outcomes they received from the other representation. However, there were also instances in which analogical encoding took place, that is, situations in which slightly discrepant results between the representations ‘forced’ students to focus on those features that could be generalised across the two representations. No statistical differences were found in the amount of experienced cognitive conflicts and self-explanations between the simulation and combination conditions, though in self-explanations there was a nascent trend in favour of the combination. There was also a clear tendency suggesting that explicit guidance increased the amount of self-explanations. Overall, the amount of cognitive conflicts and self-explanations was very low.
The aim of the Study IV was twofold: the main aim was to provide an aggregated overview of the learning outcomes of experiments I and II; the secondary aim was to explore the relationship between the learning environments and students’ prior domain knowledge (low and high) in the experiments. Aggregated results of experiments I & II showed that on average, 91% of the students in the combination environment scored above the average of the laboratory environment, and 76% of them scored also above the average of the simulation environment. Seventy percent of the students in the simulation environment scored above the average of the laboratory environment. The results further showed that overall students seemed to benefit from combining simulations and laboratories regardless of their level of prior knowledge, that is, students with either low or high prior knowledge who studied circuits in the combination environment outperformed their counterparts who studied in the laboratory or simulation environment alone. The effect seemed to be slightly bigger among the students with low prior knowledge. However, more detailed inspection of the results showed that there were considerable differences between the experiments regarding how students with low and high prior knowledge benefitted from the combination: in Experiment I, especially students with low prior knowledge benefitted from the combination as compared to those students that used only the simulation, whereas in Experiment II, only students with high prior knowledge seemed to benefit from the combination relative to the simulation group. Regarding the differences between simulation and laboratory groups, the benefits of using a simulation seemed to be slightly higher among students with high prior knowledge. The results of the four empirical studies support the hypothesis concerning the benefits of using simulation along with laboratory activities to promote students’ conceptual understanding of electricity. 
It can be concluded that when teaching students about electricity, the students can gain better understanding when they have an opportunity to use the simulation and the real circuits in parallel than if they have only the real circuits or only a computer simulation available, even when the use of the simulation is supported with the explicit instruction. The outcomes of the empirical studies can be considered as the first unambiguous evidence on the (additional) benefits of combining laboratory and simulation activities in science education as compared to learning with laboratories and simulations alone.
Abstract:
The results of a numerical study of premixed hydrogen-air flow ignition by an oblique shock wave (OSW) stabilized by a wedge are presented, in situations where the initial and boundary conditions are such that a transition between the initial OSW and an oblique detonation wave (ODW) is observed. More precisely, the objectives of the paper are: (i) to identify the different possible structures of the transition region that exists between the initial OSW and the resulting ODW, and (ii) to evidence the effect on the ODW of an abrupt decrease of the wedge angle such that the final part of the wedge surface becomes parallel to the initial flow. For such a geometrical configuration and for the initial and boundary conditions considered, the overdriven detonation supported by the initial wedge angle is found to relax towards a Chapman-Jouguet detonation in the region where the wedge surface is parallel to the initial flow. Computations are performed using an adaptive, unstructured-grid, finite volume computer code previously developed for computing high-speed, compressible flows of reactive gas mixtures. Physico-chemical properties are functions of the local mixture composition, temperature and pressure, and they are computed using the CHEMKIN-II subroutines.
Abstract:
This work presents the implementation and comparison of three different techniques of three-dimensional computer vision:
• Stereo vision - correlation between two 2D images;
• Sensorial fusion - use of different sensors: a 2D camera + a 1D ultrasound sensor;
• Structured light.
The computer vision techniques presented here took into consideration the following characteristics:
• Computational effort (elapsed time to obtain the 3D information);
• Influence of environmental conditions (noise due to non-uniform lighting, overlighting and shadows);
• The cost of the infrastructure for each technique;
• Analysis of uncertainties, precision and accuracy.
Matlab, version 5.1, was chosen for the algorithm implementation of the three techniques due to the simplicity of its commands, programming and debugging. Besides, this software is well known and widely used by the academic community, allowing the results of this work to be reproduced and verified. Examples of three-dimensional vision applied to robotic assembly ("pick-and-place") tasks are presented.
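The first technique, stereo vision by correlation between two 2D images, can be illustrated with a minimal block-matching sketch. Sum of squared differences is used here as the matching measure; the thesis's exact correlation measure, patch size and disparity range are not specified in the abstract, so all parameters below are illustrative.

```python
import numpy as np

def disparity_ssd(left, right, patch=3, max_disp=4):
    """Minimal stereo block matching: for each pixel of the left image,
    slide a patch along the same row of the right image and pick the
    horizontal shift (disparity) with the smallest sum of squared
    differences (SSD). Depth is inversely proportional to disparity.
    """
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            best, best_d = None, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1,
                             x - d - r:x - d + r + 1].astype(float)
                ssd = np.sum((ref - cand) ** 2)
                if best is None or ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: the right view is the left view shifted by 2 pixels,
# so the recovered disparity should be 2 wherever the match is valid.
left = np.tile(np.arange(12), (8, 1))
right = np.roll(left, -2, axis=1)
disp = disparity_ssd(left, right)
```

Real implementations add sub-pixel refinement and a confidence check, but the correlation-and-pick-the-best-shift core is the same.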
Abstract:
The evolution of our society is impossible without constant progress in life-important areas such as chemical engineering and technology. Innovation, creativity and technology are the three main components driving the progress of chemistry further towards a sustainable society. Biomass, being an attractive renewable feedstock for production of fine chemicals, energy-rich materials and even transportation fuels, captures progressively new positions in the area of chemical technology. Knowledge of heterogeneous catalysis and chemical technology applied to the transformation of biomass-derived substances will open doors for a sustainable economy and facilitate the discovery of novel environmentally benign processes which will probably replace existing technologies in the era of the biorefinery. Aqueous-phase reforming (APR) is regarded as a promising technology for production of hydrogen and liquid fuels from biomass-derived substances such as C3-C6 polyols. In the present work, aqueous-phase reforming of glycerol, xylitol and sorbitol was investigated in the presence of supported Pt catalysts. The catalysts were deposited on different support materials, including Al2O3, TiO2 and carbons. Catalytic measurements were performed in a laboratory-scale continuous fixed-bed reactor. An advanced analytical approach was developed in order to identify reaction products and reaction intermediates in the APR of polyols. The influence of the substrate structure on the product formation and selectivity in the APR reaction was also investigated, showing that the yields of the desired products varied depending on the substrate chain length. Additionally, the influence of bioethanol additive in the APR of glycerol and sorbitol was studied. A reaction network was advanced to explain the formation of products and key intermediates.
The structure sensitivity in the aqueous-phase reforming reaction was demonstrated using a series of platinum catalysts supported on carbon with different Pt cluster sizes in the continuous fixed-bed reactor. Furthermore, a correlation between the textural and physico-chemical properties of the catalysts and the catalytic data was established. The effect of adding a second metal (Re, Cu) to the Pt catalysts was investigated in the APR of xylitol, showing superior hydrocarbon formation on PtRe bimetallic catalysts compared to monometallic Pt. On the basis of the experimental data obtained, mathematical modeling of the reaction kinetics was performed. The developed model was proven to describe the experimental data on APR of sorbitol with good accuracy.
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
As the world becomes more technologically advanced and economies become globalized, computer science is evolving faster than ever before. With this evolution and globalization comes the need for sustainable university curricula that adequately prepare graduates for life in the industry. Additionally, behavioural skills or “soft” skills have become just as important as technical abilities and knowledge, or “hard” skills. The objective of this study was to investigate the current skill gap that exists between computer science university graduates and actual industry needs, as well as the sustainability of current computer science university curricula, by conducting a systematic literature review of existing publications on the subject as well as a survey of recently graduated computer science students and their work supervisors. A quantitative study was carried out with respondents from six countries, mainly Finland; 31 of the responses came from recently graduated computer science professionals and 18 from their employers. The observed trends suggest that a skill gap really does exist, particularly with “soft” skills, and that many companies are forced to provide additional training to newly graduated employees if they are to be successful at their jobs.
Abstract:
Many-core systems provide great potential in application performance with their massively parallel structure. Such systems are currently being integrated into most parts of daily life, from high-end server farms to desktop systems, laptops and mobile devices. Yet these systems face increasing challenges: high temperature causing physical damage, high electricity bills both for servers and individual users, unpleasant noise levels due to active cooling, and unrealistic battery drainage in mobile devices; factors caused directly by poor energy efficiency. Power management has traditionally been an area of research providing hardware solutions or runtime power management in the operating system in the form of frequency governors. Energy awareness in application software is currently non-existent. This means that applications are not involved in the power management decisions, nor does any interface exist between the applications and the runtime system to provide such facilities. Power management in the operating system is therefore performed purely on the basis of indirect implications of software execution, usually referred to as the workload. It often results in over-allocation of resources, and hence power waste. This thesis discusses power management strategies in many-core systems in the form of increasing application software awareness of energy efficiency. The presented approach allows meta-data descriptions in the applications and is manifested in two design recommendations: 1) energy-aware mapping and 2) energy-aware execution, which allow the applications to directly influence the power management decisions. The recommendations eliminate over-allocation of resources and increase the energy efficiency of the computing system. Both recommendations are fully supported through a provided interface in combination with a novel power management runtime system called Bricktop.
The work presented in this thesis allows both new and legacy software to execute with the most energy-efficient mapping on a many-core CPU and at the most energy-efficient performance level. A set of case study examples demonstrates real-world energy savings in a wide range of applications without performance degradation.
Abstract:
Today, user experience and usability in software applications are becoming a major design issue due to the adaptation of many processes to new technologies. Therefore, the study of user experience and usability should be included in every software development project, and both should be tested to obtain traceable results. Faced with the different testing methods available to evaluate these concepts, a non-expert on the topic may have doubts about which option to choose and how to interpret the outcomes of the process. This work aims to create a process that eases the whole testing methodology, based on the process created by Seffah et al., together with a supporting software tool for following these testing methods for user experience and usability.
Abstract:
This experimental study examined the effects of cooperative learning and explicit/implicit instruction on student achievement and attitudes toward working in cooperative groups. Specifically, fourth- and fifth-grade students (n=48) were randomly assigned to two conditions: cooperative learning with explicit instruction and cooperative learning with implicit instruction. All participants were given initial training, either explicitly or implicitly, in cooperative learning procedures via 10 one-hour sessions. Following the instruction period, all students participated in completing a group project related to a famous artists unit. It was hypothesized that the explicit instruction training would enhance students' scores on the famous artists test and the group projects, as well as improve students' attitudes toward cooperative learning. Although the explicit training group did not achieve significantly higher scores on the famous artists test, significant differences were found in group project results between the explicit and implicit groups. The explicit group also exhibited more favourable and positive attitudes toward cooperative learning. The findings of this study demonstrate that combining cooperative learning with explicit instruction is an effective classroom strategy and a useful practice for presenting and learning new information, as well as for working in groups successfully.
Abstract:
This study compared approximately 50 grade 12 students studying in the co-operative education mode with approximately 50 grade 12 students studying in a traditional English course. Measures of self-esteem, locus of control and work habits were compared before and at the conclusion of one semester's involvement in the different programs. Using Coopersmith's Self-Esteem Inventory, the students who had chosen to study in the co-operative education mode scored significantly higher than the students in the traditional course. At the end of the semester, the co-operative education students' scores remained significantly higher than the English students'. Although the test showed no significant changes in self-esteem, anecdotal reports indicated that co-operative education students had increased self-esteem over the semester. No significant differences in locus of control were observed between the two groups at any time. Significant differences in work habits were observed. While both groups had the same number of absences and the same marks before taking these courses, students who were involved in co-operative education had significantly fewer absences and significantly higher marks than the students studying in the traditional course. Anecdotal reports also indicated an improvement in work habits for students who had been involved in co-operative education. The study recommends further research to determine more exactly how self-esteem and work habits develop in co-operative education students. Also, students, parents, teachers, and administrators need to be made aware of the success of this program.
Abstract:
This experimental study examined the effects of cooperative learning and a question-answering strategy called elaborative interrogation ("Why is this fact true?") on the learning of factual information about familiar animals. Retention gains were compared across four study conditions: elaborative-interrogation-plus-cooperative-learning, cooperative learning, elaborative interrogation, and reading control. Sixth-grade students (n=68) were randomly assigned to the four conditions. All participants were given initial training and practice in cooperative learning procedures via three 45-minute sessions. After studying 36 facts about six animals, students' retention gains were measured via immediate free recall, immediate matched association, 30-day, and 60-day matched association tests. A priori comparisons were made to analyze the data. For immediate free recall and immediate matched association, significant differences were found between students in the three experimental conditions and those in the control condition. Elaborative interrogation and elaborative-interrogation-plus-cooperative-learning also promoted long-term retention (measured via 30-day matched association) of the material relative to repetitive reading, with elaborative interrogation promoting the most durable gains (measured via 60-day matched association). The relationship between the types of elaborative responses and the probability of subsequent retention was also examined. Even when students were unable to provide adequate answers to the why questions, learning was facilitated more than by repetitive reading. In general, generation of adequate elaborations was associated with a greater probability of recall than was provision of inadequate answers. The findings of the study demonstrate that cooperative learning and the use of elaborative interrogation, both individually and in combination, are effective classroom procedures for facilitating children's learning of new information.