969 results for Computer generated works
Abstract:
The question of the trainability of executive functions and the impact of such training on related cognitive skills has stirred considerable research interest. Despite a number of studies investigating this, the question has not yet been settled. The general aim of this thesis was to investigate two very different types of training of executive functions: laboratory-based computerized training (Studies I-III) and real-world training through bilingualism (Studies IV-V). Treating bilingualism as a kind of training of executive functions is based on the idea that managing two languages requires executive resources, and previous studies have suggested a bilingual advantage in executive functions. Three executive functions were studied in the present thesis: updating of working memory (WM) contents, inhibition of irrelevant information, and shifting between tasks and mental sets. Studies I-III investigated the effects of computer-based training of WM updating (Study I), inhibition (Study II), and set shifting (Study III) in healthy young adults. All three studies showed improved performance on the trained task. More importantly, improvement on an untrained task tapping the trained executive function (near transfer) was seen in Studies I and II. None of the three studies showed improvement on untrained tasks tapping some other cognitive function (far transfer) as a result of training. Study I also used PET to investigate the effects of WM updating training on a neurotransmitter closely linked to WM, namely dopamine. The PET results revealed increased striatal dopamine release during WM updating performance as a result of training. Study IV investigated the ability to inhibit task-irrelevant stimuli in bilinguals and monolinguals using a dichotic listening task. The results showed that the bilinguals exceeded the monolinguals in inhibiting task-irrelevant information. Study V introduced a new, complementary research approach to study the bilingual executive advantage and its underlying mechanisms. To circumvent the methodological problems related to natural groups designs, this approach focuses only on bilinguals and examines whether individual differences in bilingual behavior correlate with executive task performance. Using measures that tap the three above-mentioned executive functions, the results suggested that more frequent language switching was associated with better set shifting skills, and earlier acquisition of the second language was related to better inhibition skills. In conclusion, the present behavioral results showed that computer-based training of executive functions can improve performance on the trained task and on closely related tasks, but does not yield a more general improvement of cognitive skills. Moreover, the functional neuroimaging results reveal that WM training modulates striatal dopaminergic function, pointing to training-induced neural plasticity in this important neurotransmitter system. With regard to bilingualism, the results provide further support for the idea that bilingualism can enhance executive functions. In addition, the new complementary research approach proposed here provides some clues as to which aspects of everyday bilingual behavior may be related to the executive function advantage in bilingual individuals.
Abstract:
Communication, the flow of ideas and information between individuals in a social context, is at the heart of the educational experience. Constructivism and constructivist theories form the foundation for the collaborative learning processes of creating and sharing meaning in online educational contexts. The Learning and Collaboration in Technology-enhanced Contexts (LeCoTec) course comprised 66 participants drawn from four European universities (Oulu, Turku, Ghent and Ramon Llull). These participants were split into 15 groups with the express aim of learning about computer-supported collaborative learning (CSCL). The Community of Inquiry model (social, cognitive and teaching presences) provided the content and tools for learning about and researching the collaborative interactions in this environment. Sampled comments from the collaborative phase were collected and analyzed at chain level and group level, with the aim of identifying the message types that sustained high learning outcomes. Furthermore, Social Network Analysis was used to examine the density of whole-group interactions, as well as the popular and active members within the highly collaborating groups. Long chains were observed to occur in groups with high-quality outcomes. These chains were also characterized by Social, Interactivity, Administrative and Content comment types. In addition, high outcomes were realized in the highly interactive cases and high-density groups. In groups with low interactivity, commenting centered around one or two central group members. In conclusion, future online environments should support higher-order learning and develop greater metacognition and self-regulation. Moreover, such an environment, with a wide variety of problem-solving tools, would enhance interactivity.
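As a hedged illustration of the Social Network Analysis described in this abstract, the Python sketch below builds a reply network for one hypothetical group and computes its density and degree centralities with the networkx library; the member names and edges are invented for the example.

```python
import networkx as nx

# Hypothetical reply network for one collaborating group:
# an edge (a, b) means participant a commented on a message by b.
edges = [("anna", "ben"), ("ben", "carla"), ("carla", "anna"),
         ("anna", "carla"), ("dave", "anna"), ("ben", "dave")]

G = nx.Graph()
G.add_edges_from(edges)

# Density: fraction of possible pairwise ties actually present.
# High-density groups were associated with high learning outcomes.
print(f"group density: {nx.density(G):.2f}")

# Degree centrality highlights the popular/active members; in groups
# with low interactivity, commenting centered on one or two of them.
for member, centrality in sorted(nx.degree_centrality(G).items(),
                                 key=lambda kv: -kv[1]):
    print(f"{member}: {centrality:.2f}")
```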
Abstract:
This work presents the implementation and comparison of three different techniques of three-dimensional computer vision:
• Stereo vision - correlation between two 2D images;
• Sensor fusion - use of different sensors: a 2D camera + a 1D ultrasound sensor;
• Structured light.
The computer vision techniques presented here were compared with respect to the following characteristics:
• Computational effort (elapsed time to obtain the 3D information);
• Influence of environmental conditions (noise due to non-uniform lighting, overlighting and shadows);
• The cost of the infrastructure for each technique;
• Analysis of uncertainties, precision and accuracy.
The Matlab software, version 5.1, was chosen for implementing the algorithms of the three techniques because of the simplicity of its commands, programming and debugging. Moreover, this software is well known and widely used by the academic community, allowing the results of this work to be reproduced and verified. Examples of three-dimensional vision applied to robotic assembly tasks ("pick-and-place") are presented.
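The thesis implementation was in Matlab 5.1; as a minimal Python sketch of the first technique (stereo vision by correlating two 2D images), the fragment below estimates the disparity of a single patch by block matching along a scanline with a sum-of-squared-differences score. The image contents, patch size and search range are invented; a real pipeline would compute disparities densely and convert them to depth via the camera baseline and focal length.

```python
import numpy as np

def patch_disparity(left, right, row, col, patch=5, max_disp=16):
    """Estimate the disparity of the patch centered at (row, col) in the
    left image by searching along the same scanline in the right image,
    scoring candidates with the sum of squared differences (SSD)."""
    h = patch // 2
    ref = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_ssd = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - h < 0:
            break  # candidate window would leave the image
        cand = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        ssd = np.sum((ref - cand) ** 2)
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d  # depth = focal_length * baseline / disparity

# Toy example: a random left image shifted 3 pixels to the left.
rng = np.random.default_rng(0)
left = rng.random((32, 32))
right = np.roll(left, -3, axis=1)
print(patch_disparity(left, right, row=16, col=20))  # -> 3
```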
Abstract:
The acceleration of solar energetic particles (SEPs) by flares and coronal mass ejections (CMEs) has been a major topic of research for the solar-terrestrial physics and geophysics communities for decades. This thesis discusses theories describing first-order Fermi acceleration of SEPs through repeated crossings of a CME-driven shock. We propose that particle trapping occurs through self-generated Alfvén waves, leading to a turbulent trapping region in front of the shock. Decelerating coronal shocks are shown to be capable of efficient SEP acceleration, provided that seed particle injection is sufficient. Quasi-parallel shocks are found to inject thermal particles with good efficiency. The roles of minimum injection velocities, cross-field diffusion, downstream scattering efficiency and the cross-shock potential are investigated in detail, with downstream isotropisation timescales having a major effect on injection efficiency. Heavier elements up to iron are found to exhibit significantly harder accelerated spectra than protons. The cut-off energies of the accelerated spectra are found to scale as (Q/A)^1.5, which is explained through analysis of the spectral shape of the amplified Alfvénic turbulence. Acceleration times to different threshold energies are found to be non-linear, indicating that self-consistent time-dependent simulations are required in order to expose the full extent of the acceleration dynamics. The well-established quasilinear theory (QLT) of particle scattering is investigated by comparing QLT scattering coefficients with those found via full-orbit simulations. QLT is found to overemphasise resonance conditions. This finding supports the simplifications implemented in the presented coronal shock acceleration (CSA) simulation software. The CSA software package is used to simulate a range of acceleration scenarios. The results are found to be in agreement with well-established particle acceleration theory. At the same time, new spatial and temporal dynamics of particle population trapping and wave evolution are revealed.
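As a hedged restatement of the reported scaling, the relation below expresses the cut-off energy of an ion species with charge Q and mass number A relative to the proton cut-off energy; the normalisation to protons and the coronal iron charge state Q = 14 in the numerical example are assumptions added here for illustration.

```latex
% Illustrative form of the reported cut-off scaling; E_{c,p} is the
% proton cut-off energy, and Q = 14 is an assumed coronal charge
% state for iron (A = 56).
E_c(Q, A) \approx E_{c,p}\left(\frac{Q}{A}\right)^{1.5},
\qquad
\frac{E_c(\mathrm{Fe})}{E_{c,p}} = \left(\frac{14}{56}\right)^{1.5} \approx 0.125.
```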
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD interface (create, retrieve, update and delete) of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of the previous ones, facilitating scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches that are capable of creating web services that can be trusted for their behavior. Systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable and stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature tools that are continuously evolving. We have used the UML class diagram and the UML state machine diagram, with additional design constraints, to provide the resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which can help in requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing the unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, so that they can be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which would result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and are verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach. Test cases are generated from the UPPAAL timed automata, and using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace the unfulfilled service goals back to faults in the design models. A final contribution of the thesis is an implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions. The pre-conditions of methods constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
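As a minimal sketch of the kind of skeleton the partial code generation step produces, the Python fragment below guards a hotel room booking method with a pre- and a post-condition in design-by-contract style; the resource states, method name and exception type are invented for illustration and do not reproduce the tool's actual output.

```python
class ContractViolation(Exception):
    pass

class BookingResource:
    """Stateful REST resource: a booking moves through
    'available' -> 'reserved' (-> 'confirmed', omitted here)."""

    def __init__(self):
        self.state = "available"
        self.guest = None

    def put_reservation(self, guest: str):
        # Pre-condition (from the UML state machine): the stateful
        # service may only be invoked in the right state.
        if self.state != "available":
            raise ContractViolation("PUT /reservation: room not available")
        # --- developer fills in the method body here ---
        self.guest = guest
        self.state = "reserved"
        # Post-condition: constrains the developer to implement
        # the right functionality.
        if not (self.state == "reserved" and self.guest == guest):
            raise ContractViolation("postcondition of put_reservation failed")

booking = BookingResource()
booking.put_reservation("alice")
print(booking.state)  # reserved
try:
    booking.put_reservation("bob")  # wrong state for this request
except ContractViolation as e:
    print(e)
```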
Abstract:
The use of water-sensitive papers is an important tool for assessing the quality of pesticide application on crops, but manual analysis is laborious and time-consuming. Thus, this study aimed to evaluate and compare the results obtained from four software programs for spray droplet analysis on different scanned images of water-sensitive papers. After spraying, papers with four droplet deposition patterns (varying droplet spectra and densities) were analyzed manually and by means of the following computer programs: CIR, e-Sprinkle, DepositScan and Conta-Gotas. The volume median diameter, the number median diameter and the number of droplets per target area were studied. There is a strong correlation between the values measured using the different programs and the manual analysis, but there is a great difference between the numerical values measured for the same paper. Thus, it is not advisable to compare results obtained from different programs.
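As a hedged sketch of the summary statistics named above, the Python fragment below computes the number median diameter, the volume median diameter (DV0.5) and the droplet density per cm² from a list of droplet stain diameters; the diameters and the card size are invented for the example.

```python
import numpy as np

# Hypothetical droplet diameters (micrometres) measured on one
# water-sensitive paper, and the scanned paper area in cm^2.
diameters = np.array([110, 150, 180, 210, 240, 260, 300, 350, 420, 500.0])
area_cm2 = 26 * 76 / 100.0  # an assumed 26 x 76 mm card

# Number median diameter: half the droplets are smaller.
nmd = np.median(diameters)

# Volume median diameter (DV0.5): half the sprayed *volume* is in
# smaller droplets; droplet volume scales with diameter cubed.
order = np.argsort(diameters)
volumes = diameters[order] ** 3
cum = np.cumsum(volumes) / volumes.sum()
vmd = diameters[order][np.searchsorted(cum, 0.5)]

print(f"NMD = {nmd:.0f} um, DV0.5 = {vmd:.0f} um, "
      f"density = {len(diameters) / area_cm2:.1f} droplets/cm^2")
```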
Abstract:
Presentation at a seminar organized by the KDK usability working group: How do users' expectations challenge our metadata practices? 30 September 2014.
Abstract:
A pulsatile pressure-flow model was developed for in vitro quantitative color Doppler flow mapping studies of valvular regurgitation. The flow through the system was generated by a piston which was driven by stepper motors controlled by a computer. The piston was connected to acrylic chambers designed to simulate "ventricular" and "atrial" heart chambers. Inside the "ventricular" chamber, a prosthetic heart valve was placed at the inflow connection with the "atrial" chamber while another prosthetic valve was positioned at the outflow connection with flexible tubes, elastic balloons and a reservoir arranged to mimic the peripheral circulation. The flow model was filled with a 0.25% corn starch/water suspension to improve Doppler imaging. A continuous flow pump transferred the liquid from the peripheral reservoir to another one connected to the "atrial" chamber. The dimensions of the flow model were designed to permit adequate imaging by Doppler echocardiography. Acoustic windows allowed placement of transducers distal and perpendicular to the valves, so that the ultrasound beam could be positioned parallel to the valvular flow. Strain-gauge and electromagnetic transducers were used for measurements of pressure and flow in different segments of the system. The flow model was also designed to fit different sizes and types of prosthetic valves. This pulsatile flow model was able to generate pressure and flow in the physiological human range, with independent adjustment of pulse duration and rate as well as of stroke volume. This model mimics flow profiles observed in patients with regurgitant prosthetic valves.
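As a hedged numerical sketch of the piston drive, the Python fragment below generates a simple ejection flow waveform with independently adjustable stroke volume, pulse rate and pulse duration, the parameters the model exposes; the half-sine waveform shape is an assumption made here for illustration, not the device's actual motor profile.

```python
import numpy as np

def pulsatile_flow(stroke_volume_ml=70.0, rate_bpm=72.0,
                   ejection_fraction_of_cycle=0.35, n=1000):
    """Return (t, q): time in s and flow in ml/s over one cycle.
    Ejection is modelled as a half sine whose integral equals the
    stroke volume; the rest of the cycle has zero forward flow."""
    period = 60.0 / rate_bpm
    t_ej = ejection_fraction_of_cycle * period
    t = np.linspace(0.0, period, n)
    q = np.zeros_like(t)
    mask = t < t_ej
    # Integral of sin(pi*t/t_ej) over [0, t_ej] is 2*t_ej/pi, so the
    # peak flow that delivers the stroke volume is SV*pi/(2*t_ej).
    peak = stroke_volume_ml * np.pi / (2.0 * t_ej)
    q[mask] = peak * np.sin(np.pi * t[mask] / t_ej)
    return t, q

t, q = pulsatile_flow()
print(f"stroke volume ~= {np.trapz(q, t):.1f} ml")  # ~70 ml
```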
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The capability of the code to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the difficulty lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 peculiarity, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments. The validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which usually rely on statistical methods alone.
Abstract:
Nowadays, computer-based systems tend to become more complex and to control increasingly critical functions affecting different areas of human activity. Failures of such systems may result in the loss of human lives as well as significant damage to the environment. Therefore, their safety needs to be ensured. However, the development of safety-critical systems is not a trivial exercise. Hence, to preclude design faults and guarantee the desired behaviour, different industrial standards prescribe the use of rigorous techniques for the development and verification of such systems. The more critical the system is, the more rigorous the approach that should be undertaken. To ensure the safety of a critical computer-based system, satisfaction of the safety requirements imposed on this system should be demonstrated. This task involves a number of activities. In particular, a set of safety requirements is usually derived by conducting various safety analysis techniques. Strong assurance that the system satisfies the safety requirements can be provided by formal methods, i.e., mathematically based techniques. At the same time, the evidence that the system under consideration meets the imposed safety requirements can be demonstrated by constructing safety cases. However, the overall safety assurance process of critical computer-based systems remains insufficiently defined for the following reasons. Firstly, there are semantic differences between safety requirements and formal models: informally represented safety requirements must be translated into the underlying formal language to enable further verification. Secondly, the development of formal models of complex systems can be labour-intensive and time-consuming. Thirdly, there are only a few well-defined methods for the integration of formal verification results into safety cases. This thesis proposes an integrated approach to the rigorous development and verification of safety-critical systems that (1) facilitates the elicitation of safety requirements and their incorporation into formal models, (2) simplifies formal modelling and verification by proposing specification and refinement patterns, and (3) assists in the construction of safety cases from the artefacts generated by formal reasoning. Our chosen formal framework is Event-B. It allows us to tackle the complexity of safety-critical systems as well as to structure safety requirements by applying abstraction and stepwise refinement. The Rodin platform, a tool supporting Event-B, assists in automatic model transformations and proof-based verification of the desired system properties. The proposed approach has been validated by several case studies from different application domains.
Abstract:
The aim of the present study was to measure full epidermal thickness, stratum corneum thickness, rete length, dermal papilla widening and suprapapillary epidermal thickness in psoriasis patients using a light microscope and computer-supported image analysis. The data obtained were analyzed in terms of patient age, type of psoriasis, total body surface area involvement, scalp and nail involvement, duration of psoriasis, and family history of the disease. The study was conducted on 64 patients and 57 controls whose skin biopsies were examined by light microscopy. The acquired microscopic images were transferred to a computer and measurements were made using image analysis. The skin biopsies, taken from different body areas, were examined for different parameters such as epidermal, corneal and suprapapillary epidermal thickness. The most prominent increase in thickness was detected in the palmar region. Corneal thickness was more pronounced in patients with scalp involvement than in patients without scalp involvement (t = -2.651, P = 0.008). The most prominent increase in rete length was observed in the knees (median: 491 µm, t = 10.117, P < 0.001). The difference in rete length between patients with a positive and a negative family history was significant (t = -3.334, P = 0.03), rete length being 27% greater in psoriasis patients without a family history. The differences in dermal papilla distances among patients were very small. We conclude that microscope-supported thickness measurements provide objective results.
Abstract:
Large-scale genome projects have generated a rapidly increasing number of DNA sequences. Therefore, the development of computational methods to rapidly analyze these sequences is essential for progress in genomic research. Here we present an automatic annotation system for the preliminary analysis of DNA sequences. The gene annotation tool (GATO) is a bioinformatics pipeline designed to facilitate routine functional annotation and easy access to annotated genes. It was designed in view of the frequent need of genomic researchers to access data pertaining to a common set of genes. In the GATO system, annotation is generated by querying some of the Web-accessible resources, and the information is stored in a local database, which keeps a record of all previous annotation results. GATO may be accessed from anywhere over the internet, or it may be run locally if a large number of sequences are to be annotated. It is implemented in PHP and Perl and may be run on any suitable Web server. Usually, the installation and application of annotation systems require experience and are time-consuming, but GATO is simple and practical, allowing anyone with basic skills in informatics to use it without any special training. GATO can be downloaded at [http://mariwork.iq.usp.br/gato/]. The minimum free disk space required is 2 MB.
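GATO itself is written in PHP and Perl; as a language-neutral sketch of the caching design described above, where annotation fetched from web resources is recorded in a local database of previous results, the Python fragment below caches lookups in SQLite. The table layout and the fetch function are invented for the example and do not reproduce GATO's actual schema.

```python
import sqlite3

def fetch_remote_annotation(gene_id: str) -> str:
    # Placeholder for querying a Web-accessible resource; a real
    # pipeline would call an HTTP API here.
    return f"annotation for {gene_id}"

def annotate(conn: sqlite3.Connection, gene_id: str) -> str:
    """Return the annotation, querying the remote resource only on a
    cache miss and recording every result in the local database."""
    row = conn.execute(
        "SELECT annotation FROM annotations WHERE gene_id = ?",
        (gene_id,)).fetchone()
    if row:
        return row[0]
    annotation = fetch_remote_annotation(gene_id)
    conn.execute("INSERT INTO annotations VALUES (?, ?)",
                 (gene_id, annotation))
    conn.commit()
    return annotation

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE annotations (gene_id TEXT PRIMARY KEY,"
             " annotation TEXT)")
print(annotate(conn, "BRCA1"))  # remote fetch, then stored locally
print(annotate(conn, "BRCA1"))  # served from the local database
```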
Abstract:
A brain-computer interface (BCI) is a kind of human-machine interface that provides a new method of interaction between humans and computers or other equipment. The most significant characteristic of a BCI system is that its control input is electrical activity acquired from the brain instead of traditional inputs such as hand or eye movements. BCI technology has developed rapidly during the last two decades, mainly as an assistive technology that helps disabled people improve their quality of life. With the appearance of low-cost devices such as the EMOTIV headset, BCI technology has reached the general public through many useful applications, including video gaming, virtual reality and virtual keyboards. The purpose of this research is to become familiar with the EMOTIV EPOC system and to use it to build an EEG-based BCI system for controlling an industrial manipulator by means of human thought. To build the BCI system, an acquisition program based on the EMOTIV EPOC system was designed, and an MFC-based dialog that works as an operation panel is presented. Furthermore, the inverse kinematics of the RV-3SB industrial robot was solved. In the last part of this research, the designed BCI system with human thought input is evaluated, and the results indicate that the system runs smoothly and clearly displays the motion type and the incremental displacement of the motion.
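As a hedged sketch of the final stage of such a control loop, the Python fragment below maps classified mental commands to incremental Cartesian displacements of the manipulator's tool position; the command names and step size are invented, and a real system would hand each target pose to the RV-3SB inverse kinematics solved in the thesis.

```python
import numpy as np

# Invented mapping from classified EEG commands to Cartesian steps.
STEP_MM = 5.0
COMMAND_TO_DELTA = {
    "push": np.array([STEP_MM, 0.0, 0.0]),
    "pull": np.array([-STEP_MM, 0.0, 0.0]),
    "lift": np.array([0.0, 0.0, STEP_MM]),
    "drop": np.array([0.0, 0.0, -STEP_MM]),
}

def apply_commands(start_mm, commands):
    """Accumulate incremental displacements; each intermediate pose
    would be passed to the manipulator's inverse kinematics."""
    pose = np.array(start_mm, dtype=float)
    for cmd in commands:
        pose += COMMAND_TO_DELTA.get(cmd, np.zeros(3))
        print(f"{cmd:>5}: move tool to {pose} mm")
    return pose

# Hypothetical classified command stream from the headset.
apply_commands([300.0, 0.0, 200.0], ["push", "push", "lift", "pull"])
```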
Abstract:
How does a successful programmer work? The tasks of programming computer games and programming industrial, safety-critical systems appear rather different. Through a careful empirical investigation, the thesis compares and contrasts these two forms of programming and shows that programming involves more than technical ability. Starting from hermeneutic and rhetorical theory, and drawing on both cultural studies and computer science, the thesis shows that the programmers' tradition and values are fundamental to their work, and that both kinds of programming can be understood and analyzed through the classical tradition of text interpretation. Moreover, computer programs can be regarded and analyzed with the help of classical theories of speech production in practice; in this context, programs are seen as a kind of utterance. All in all, the thesis advocates a return to the foundations of science, which involve a constant and unceasing cyclical movement between experiencing and understanding. This stands in contrast to a reductionist view of science, which distinguishes sharply between the subjective and the objective and thus presupposes the possibility of attaining complete knowledge. Incomplete knowledge is the domain of interpretation and hermeneutics. The aim of the thesis is to demonstrate, by means of examples, the cultural, hermeneutic and rhetorical nature of programming.
Abstract:
This study compared the effectiveness of multifocal visual evoked cortical potentials (mfVEP) elicited by pattern pulse stimulation with that of pattern reversal stimulation in producing reliable responses (signal-to-noise ratio > 1.359). Participants were 14 healthy subjects. Visual stimulation was obtained using a 60-sector dartboard display consisting of 6 concentric rings presented in either pulse or reversal mode. Each sector, consisting of 16 checks at 99% Michelson contrast and 80 cd/m² mean luminance, was controlled by a binary m-sequence in the time domain. The signal-to-noise ratio was generally larger in the pattern reversal mode than in the pattern pulse mode. The number of reliable responses was similar in the central sectors for the two stimulation modes. At the periphery, pattern reversal produced a larger number of reliable responses. Pattern pulse stimuli performed similarly to pattern reversal stimuli in generating reliable waveforms in rings R1 and R2. The advantage of using both protocols to study mfVEP responses is their complementarity: in some patients, reliable waveforms in specific sectors may be obtained with only one of the two methods. The joint analysis of pattern reversal and pattern pulse stimuli increased the rate of reliable responses by 7.14% in R1, 5.35% in R2, 4.76% in R3, 3.57% in R4, 2.97% in R5, and 1.78% in R6. From R1 to R4, the reliability of generating mfVEPs was above 70% when both protocols were used. Thus, for very high reliability and a thorough examination of visual performance, it is recommended to use both stimulation protocols.
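As a hedged sketch of the reliability criterion, the Python fragment below computes a per-sector signal-to-noise ratio as the RMS amplitude of a signal window over that of a later noise window and counts the sectors exceeding the 1.359 threshold used in the study; the window bounds, sampling rate and simulated responses are invented for the example.

```python
import numpy as np

THRESHOLD = 1.359  # reliability criterion from the study
FS = 1000          # assumed sampling rate, Hz

def snr(response, signal_win=(45, 150), noise_win=(325, 430)):
    """RMS of the signal window divided by RMS of a late noise window
    (window bounds in ms are assumptions for this sketch)."""
    s = response[int(signal_win[0] * FS / 1000):int(signal_win[1] * FS / 1000)]
    n = response[int(noise_win[0] * FS / 1000):int(noise_win[1] * FS / 1000)]
    return np.sqrt(np.mean(s ** 2)) / np.sqrt(np.mean(n ** 2))

# Simulate 60 sector responses: every other sector carries an evoked
# Gaussian deflection near 100 ms on top of unit-variance noise.
rng = np.random.default_rng(1)
t = np.arange(0, 0.5, 1 / FS)
sectors = []
for k in range(60):  # 60-sector dartboard
    noise = rng.normal(0.0, 1.0, t.size)
    evoked = (k % 2) * 3.0 * np.exp(-((t - 0.1) / 0.02) ** 2)
    sectors.append(noise + evoked)

reliable = sum(snr(r) > THRESHOLD for r in sectors)
print(f"{reliable} of {len(sectors)} sectors reliable (SNR > {THRESHOLD})")
```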