986 results for Computer users
Abstract:
Communication, the flow of ideas and information between individuals in a social context, is the heart of educational experience. Constructivism and constructivist theories form the foundation for the collaborative learning processes of creating and sharing meaning in online educational contexts. The Learning and Collaboration in Technology-enhanced Contexts (LeCoTec) course comprised 66 participants drawn from four European universities (Oulu, Turku, Ghent and Ramon Llull). These participants were split into 15 groups with the express aim of learning about computer-supported collaborative learning (CSCL). The Community of Inquiry model (social, cognitive and teaching presences) provided the content and tools for learning about and researching the collaborative interactions in this environment. Sampled comments from the collaborative phase were collected and analyzed at chain level and group level, with the aim of identifying the message types that sustained high learning outcomes. Furthermore, Social Network Analysis was used to examine the density of whole-group interactions and to identify the popular and active members within the highly collaborative groups. It was observed that long chains occurred in groups with high-quality outcomes. These chains were also characterized by Social, Interactivity, Administrative and Content comment types. In addition, high outcomes were realized in the highly interactive cases and in high-density groups. In low-interactivity groups, commenting was centered on one or two group members. In conclusion, future online environments should support higher-order learning and develop greater metacognition and self-regulation. Moreover, such an environment, with a wide variety of problem-solving tools, would enhance interactivity.
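[Editor's note: to make the Social Network Analysis measures mentioned above concrete, here is a minimal Python sketch using networkx. The edge list and member names are invented for illustration; they are not data from the study.]

```python
# Density and degree-centrality measures of the kind used to find
# "popular" and "active" members in a group's comment-reply network.
import networkx as nx

# Hypothetical (commenter, replied_to) pairs from one group's discussion.
replies = [("anna", "ben"), ("ben", "anna"), ("carl", "anna"),
           ("dina", "anna"), ("ben", "carl")]

G = nx.DiGraph()
G.add_edges_from(replies)

# Density: fraction of all possible directed ties actually present.
print("group density:", nx.density(G))

# Out-degree ~ "active" members (many comments sent),
# in-degree ~ "popular" members (many comments received).
out_c = nx.out_degree_centrality(G)
in_c = nx.in_degree_centrality(G)
print("most active:", max(out_c, key=out_c.get))
print("most popular:", max(in_c, key=in_c.get))
```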
Abstract:
This work presents the implementation and comparison of three different techniques of three-dimensional computer vision:
• Stereo vision: correlation between two 2D images;
• Sensorial fusion: use of different sensors, a 2D camera plus a 1D ultrasound sensor;
• Structured light.
The computer vision techniques presented here were evaluated with respect to the following characteristics:
• Computational effort (elapsed time to obtain the 3D information);
• Influence of environmental conditions (noise due to non-uniform lighting, overlighting and shadows);
• The cost of the infrastructure for each technique;
• Analysis of uncertainties, precision and accuracy.
Matlab, version 5.1, was chosen for implementing the algorithms of the three techniques because of the simplicity of its commands, programming and debugging. Moreover, this software is well known and widely used by the academic community, allowing the results of this work to be reproduced and verified. Examples of three-dimensional vision applied to robotic assembly ("pick-and-place") tasks are presented.
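[Editor's note: the first technique listed, correlation-based stereo vision, can be sketched with today's tooling as follows. This uses OpenCV's block matcher rather than the thesis's Matlab 5.1 implementation, and the file names are placeholders.]

```python
# Correlation-based stereo: for each pixel in the left image, a window is
# correlated with horizontally shifted windows in the right image; the
# best-matching shift is the disparity.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)

# OpenCV returns disparities in fixed point, scaled by 16.
disparity = matcher.compute(left, right) / 16.0

# Depth is inversely proportional to disparity: Z = f * B / d,
# where f is the focal length and B the baseline between the cameras.
```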
Abstract:
The horse industry in many ways still operates the same way it did at the beginning of the 20th century. At the same time, the role of the horse has changed dramatically, from a beast of burden to a top athlete, a production animal or a beloved pet. A racehorse or an equestrian sport horse is trained and cared for like any other athlete, but unlike its human counterpart, it might end up on our plate. Under European and many other countries' laws, a horse is a production animal. The medical history of a horse must be known if it is to be slaughtered, to ensure that the meat is safe for human consumption. Today this vital medical information should be noted in the horse's passport, but this paper-based system is not reliable. If a horse is sold, depending on the country's laws, the medical records might not be transferred to the new owner, the horse's passport might get lost, and so on; the system is not foolproof. It is not only horse owners who struggle with paperwork; veterinarians and other officials often spend much time on redundant paperwork as well. The main research question of this thesis is whether information systems (IS) could be used to help the different stakeholders within the horse industry. Veterinarians in particular, who travel to stables to treat horses, cannot always take their computers with them, since the somewhat unsanitary environment is not suitable for a sensitive technological device. Currently there is no common medical database for horses, although such a database with a support system could help with many problems, including vaccination and disease control, food safety, and export and import issues. The main stakeholders within the horse industry, including equine veterinarians and horse owners, were studied to find out their daily routines and their needs for a possible support system. The research showed that there are several areas within the horse industry where IS could support stakeholders' daily routines. A support system with web and mobile accessibility for the main stakeholders is therefore under development. Since veterinarians will be the main users of this support system, it is very important to ensure that they find it useful and beneficial in their daily work. To ensure the desired result, the research and development of the system has been done iteratively with the stakeholders, following the Action Design Research methodology.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation, so transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model, and can thus be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid both over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change over time, a check is required to see whether the current computing resources are adequate for the video requests; work on admission control is therefore also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction in video transcoding. The evaluation of the proposed strategies is performed using a message passing interface (MPI) based video transcoder, which uses a coarse-grained parallel processing approach in which video is segmented at the group-of-pictures (GOP) level.
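[Editor's note: the computation-storage trade-off described above can be illustrated with a minimal decision rule: keep a transcoded video cached while the expected storage cost stays below the cost of transcoding it again. The cost figures and access-rate model below are assumptions for illustration, not the thesis's actual strategy.]

```python
# Keep a transcoded video in storage only while caching it is cheaper
# than re-transcoding it on demand.

def keep_in_storage(size_gb: float, requests_per_month: float,
                    storage_cost_gb_month: float = 0.023,
                    transcode_cost_per_job: float = 0.50) -> bool:
    """Return True if caching the video is cheaper than re-transcoding."""
    monthly_storage_cost = size_gb * storage_cost_gb_month
    monthly_retranscode_cost = requests_per_month * transcode_cost_per_job
    return monthly_storage_cost < monthly_retranscode_cost

# A popular video is worth keeping; a rarely accessed one is not.
print(keep_in_storage(size_gb=2.0, requests_per_month=30))    # True
print(keep_in_storage(size_gb=2.0, requests_per_month=0.05))  # False
```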
Abstract:
Social media has become a part of many people's everyday lives. In the library field the adoption of social media has been widespread, and discussions of the development of "Library 2.0" began at an early stage. The aim of this thesis is to study the interface between public libraries, social media, and users, focusing on information activities. The main research question is: How is the interface between public libraries and social media perceived and acted upon by its main stakeholders (library professionals and users)? The background of Library 2.0 is strongly associated with the development of the Web and social media, as well as with public libraries and their user-centered and information-technological development. The theoretical framework builds on research within the area of Library and Information Science concerning information behavior, information practice, and information activities. Earlier research on social media and public libraries is also reviewed in this thesis. Two methods, a survey and content analysis, were applied to map the interface between social media and public libraries. One questionnaire was handed out to users and another was sent to library professionals, and the results were statistically analyzed. In the content analysis, public library Facebook pages were studied. All the empirical investigations were conducted in the region of Finland Proper. An integrated analysis of the results deepens the understanding of the key elements of the social media and public library context: interactivity, information activities, perceptions, and stakeholders. In this context seven information activities were distinguished: reading, seeking, creating, communicating, informing, mediating, and contributing. This thesis contributes to the development of research on information activities and draws a realistic picture of the challenges and opportunities in the social media and public library context. It also contributes knowledge about library professionals and library users, and the differences in their perceptions of the interface between libraries and social media.
Abstract:
End-user development is a very common but often largely overlooked phenomenon in information systems research and practice. End-user development means that regular people, the end-users of software, rather than professional developers, are doing software development. A large number of people are directly or indirectly affected by the results of these non-professional development activities. The number of users performing end-user development activities is difficult to ascertain precisely, but it is very large and still growing. Computer adoption is approaching 100%, many new types of computational devices are continually being introduced, and devices not previously programmable are becoming so. This means that, at this very moment, hundreds of millions of people are likely struggling with development problems. Furthermore, software itself is continually being adapted for more flexibility, enabling users to change the behaviour of their software themselves, and new software and services are helping to transform users from consumers to producers. Much of this is now found online. The problem for the end-user developer is that little of this development is supported by anyone. Organisations often do not notice end-user development and consequently neither provide support for it nor are equipped to do so, and many end-user developers do not belong to any organisation at all. The end-user development process itself may also aggravate the problem: end-users are usually not strongly committed to the development process, which tends to be iterative and ad hoc, so support becomes a distant third behind getting the job done and figuring out the development issues needed to get the job done. Sometimes the software itself may exacerbate the issue by simplifying the development process and de-emphasising the difficulty of the task being undertaken. Online support could be the lifeline the end-user developer needs: going online, one can find all the knowledge one could ever need. However, that still does not help the end-user apply this information or knowledge in practice. A virtual community, through its ability to adopt the end-user's specific context, could surmount this final obstacle. This thesis explores the concept of end-user development and how it could be supported through online sources, in particular virtual communities, which, it is argued here, seem to fit the end-user developer's needs very well. The experiences of real end-user developers and prior literature were used in this process. The emphasis has been on those end-user developers, e.g. small business owners, who may have literally nowhere to turn for support. Adopting the viewpoint of the end-user developer, the thesis examines the question of how an end-user could use a virtual community effectively, improving the results of the support process, assuming the common situation where the demand for support outstrips the supply.
Abstract:
A web service is a software system that provides a machine-processable interface to other machines over the network using different Internet protocols. Web services are increasingly used in industry to automate different tasks and offer services to a wider audience. The REST architectural style aims at producing scalable and extensible web services using technologies that play well with the existing tools and infrastructure of the web. It provides a uniform set of operations that can be used to invoke a CRUD (create, retrieve, update and delete) interface of a web service. The stateless behavior of the service interface requires that every request to a resource be independent of previous ones, which facilitates scalability. Automated systems, e.g., hotel reservation systems, provide advanced scenarios for stateful services that require a certain sequence of requests to be followed in order to fulfill the service goals. Designing and developing such services for advanced scenarios under REST constraints requires rigorous approaches capable of creating web services that can be trusted for their behavior; systems that can be trusted for their behavior can be termed dependable systems. This thesis presents an integrated design, analysis and validation approach that helps the service developer create dependable, stateful REST web services. The main contribution of this thesis is a novel model-driven methodology to design behavioral REST web service interfaces and their compositions. The behavioral interfaces provide information on what methods can be invoked on a service and on the pre- and post-conditions of these methods. The methodology uses the Unified Modeling Language (UML) as the modeling language, which has a wide user base and mature, continuously evolving tools. We have used the UML class diagram and the UML state machine diagram, with additional design constraints, to provide resource and behavioral models, respectively, for designing REST web service interfaces. These service design models serve as a specification document, and the information presented in them has manifold applications. The service design models also contain information about the time and domain requirements of the service, which supports requirement traceability, an important part of our approach. Requirement traceability helps in capturing faults in the design models and in other elements of the software development environment by tracing unfulfilled requirements of the service back and forth. Information about service actors is also included in the design models; it is required for authenticating service requests by authorized actors, since not all types of users have access to all the resources. In addition, by following our design approach, the service developer can ensure that the designed web service interfaces are REST compliant. The second contribution of this thesis is consistency analysis of the behavioral REST interfaces. To overcome the inconsistency problem and design errors in our service models, we have used semantic technologies. The REST interfaces are represented in the web ontology language OWL 2, and can thus be part of the semantic web. These interfaces are checked with OWL 2 reasoners for unsatisfiable concepts, which result in implementations that fail. This work is fully automated thanks to the implemented translation tool and the existing OWL 2 reasoners.
The third contribution of this thesis is the verification and validation of REST web services. We have used model checking techniques with the UPPAAL model checker for this purpose. Timed automata are generated from the UML-based service design models with our transformation tool and verified for basic properties such as deadlock freedom, liveness, reachability and safety. The implementation of a web service is tested using a black-box testing approach: test cases are generated from the UPPAAL timed automata and, using the online testing tool UPPAAL TRON, the service implementation is validated at runtime against its specifications. Requirement traceability is also addressed in our validation approach, with which we can see which service goals are met and trace unfulfilled service goals back to faults in the design models. A final contribution of the thesis is the implementation of behavioral REST interfaces and service monitors from the service design models. The partial code generation tool creates code skeletons of REST web services with method pre- and post-conditions: the pre-conditions constrain the user to invoke the stateful REST service under the right conditions, and the post-conditions constrain the service developer to implement the right functionality. The details of the methods can be inserted manually by the developer as required. We do not target complete automation because we focus only on the interface aspects of the web service. The applicability of the approach is demonstrated with a pedagogical example of a hotel room booking service and a relatively complex worked example of a holiday booking service taken from an industrial context. The former presents a simple explanation of the approach, and the latter shows how stateful and timed web services offering complex scenarios and involving other web services can be constructed using our approach.
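[Editor's note: the pre-/post-condition style of behavioral interface described above can be sketched as follows. The Booking resource, its states and its methods are illustrative inventions, not output of the thesis's code generation tool.]

```python
# A stateful resource whose methods check pre-conditions (is the call legal
# in the current state?) and post-conditions (did the call leave the
# resource in the promised state?).

class Booking:
    def __init__(self):
        self.state = "pending"

    def confirm(self):
        # Pre-condition: only a pending booking may be confirmed.
        assert self.state == "pending", "pre-condition violated"
        self.state = "confirmed"
        # Post-condition: the booking is now confirmed.
        assert self.state == "confirmed", "post-condition violated"

    def cancel(self):
        # Pre-condition: a cancelled booking cannot be cancelled again.
        assert self.state in ("pending", "confirmed"), "pre-condition violated"
        self.state = "cancelled"

b = Booking()
b.confirm()  # ok: pending -> confirmed
b.cancel()   # ok: confirmed -> cancelled
```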
Abstract:
The use of water-sensitive papers is an important tool for assessing the quality of pesticide application on crops, but manual analysis is laborious and time-consuming. This study therefore aimed to evaluate and compare the results obtained from four software programs for spray droplet analysis on different scanned images of water-sensitive papers. After spraying, papers with four droplet deposition patterns (varying droplet spectra and densities) were analyzed manually and by means of the following computer programs: CIR, e-Sprinkle, DepositScan and Conta-Gotas. The volume median diameter, the number median diameter and the number of droplets per target area were studied. There is a strong correlation between the values measured by the different programs and by manual analysis, but there are large differences between the numerical values measured for the same paper. It is therefore not advisable to compare results obtained from different programs.
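[Editor's note: the core of the automated analysis these programs perform, counting and sizing droplet stains on a scanned paper, can be sketched as below. The file name and the use of Otsu thresholding are assumptions for illustration; the four programs' actual algorithms differ.]

```python
# Threshold a scanned water-sensitive paper and count stained spots.
import cv2

image = cv2.imread("water_sensitive_paper.png", cv2.IMREAD_GRAYSCALE)

# Droplet stains are darker than the background after grayscale conversion;
# Otsu's method picks the binarization threshold automatically.
_, mask = cv2.threshold(image, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Each connected component is one droplet stain; stats include pixel area,
# from which a stain diameter (and, via a spread factor, the droplet
# diameter) can be estimated.
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
areas = stats[1:, cv2.CC_STAT_AREA]  # skip label 0, the background
print("droplets per image:", n - 1)
print("mean stain area (px):", areas.mean())
```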
Abstract:
One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms that ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions. Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications, which follow the request-response paradigm, and on-demand video transcoding, which is based on video streams with soft real-time constraints. Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Our third contribution is therefore a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution of this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
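[Editor's note: the general pattern behind reactive VM provisioning, scale out when provisioned VMs are over-utilized and scale in when they are under-utilized, can be sketched as below. The thresholds and function are illustrative assumptions; this is the generic pattern, not the ARVUE algorithm itself.]

```python
# Reactive scaling: compare average VM utilization against a target band
# and return the change in VM count.

def scaling_decision(avg_utilization: float, n_vms: int,
                     upper: float = 0.80, lower: float = 0.30) -> int:
    """Return +1 to scale out, -1 to scale in, 0 to hold."""
    if avg_utilization > upper:
        return +1                      # over-utilized: add a VM
    if avg_utilization < lower and n_vms > 1:
        return -1                      # under-utilized: release a VM
    return 0                           # within the target band

print(scaling_decision(0.92, n_vms=3))  # +1
print(scaling_decision(0.15, n_vms=3))  # -1
```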
Abstract:
Technological innovations, the development of the internet, and globalization have increased the number and complexity of web applications. As a result, keeping web user interfaces understandable and usable (in terms of ease of use, effectiveness, and satisfaction) is a challenge. As part of this, designing user-intuitive interface signs (i.e., the small elements of a web user interface, such as navigational links, command buttons, icons, small images and thumbnails) is an issue for designers. Interface signs are key elements of web user interfaces because they act as communication artefacts to convey web content and system functionality, and because users interact with systems by means of interface signs. In light of the above, applying semiotic concepts (semiotics being the study of signs) to web interface signs can help to uncover new and important perspectives on web user interface design and evaluation. The thesis focuses mainly on web interface signs and uses the theory of semiotics as a background theory. The underlying aim of this thesis is to provide valuable insights for designing and evaluating web user interfaces from a semiotic perspective in order to improve overall web usability. The fundamental research question is formulated as: What do practitioners and researchers need to be aware of from a semiotic perspective when designing or evaluating web user interfaces to improve web usability? From a methodological perspective, the thesis follows a design science research (DSR) approach. A systematic literature review and six empirical studies are carried out in this thesis. The empirical studies were carried out with a total of 74 participants in Finland. The steps of a design science research process were followed while the studies were designed and conducted; these include (a) problem identification and motivation, (b) definition of the objectives of a solution, (c) design and development, (d) demonstration, (e) evaluation, and (f) communication. The data were collected using observations in a usability testing lab, analytical (expert) inspection, questionnaires, and structured and semi-structured interviews. User behaviour analysis, qualitative analysis and statistics were used to analyze the study data. The results are summarized as follows and have led to the following contributions. Firstly, the results present the current status of semiotic research in UI design and evaluation and highlight the importance of considering semiotic concepts in UI design and evaluation. Secondly, the thesis explores interface sign ontologies (i.e., the sets of concepts and skills that a user should know in order to interpret the meaning of interface signs) by providing a set of ontologies used to interpret the meaning of interface signs and a set of features related to ontology mapping in interpreting the meaning of interface signs. Thirdly, the thesis explores the value of integrating semiotic concepts in usability testing. Fourthly, the thesis proposes a semiotic framework (Semiotic Interface sign Design and Evaluation, SIDE) for interface sign design and evaluation, in order to make interface signs intuitive for end users and to improve web usability. The SIDE framework includes a set of determinants and attributes of user-intuitive interface signs and a set of semiotic heuristics for designing and evaluating interface signs. Finally, the thesis assesses (a) the quality of the SIDE framework in terms of performance metrics (e.g., thoroughness, validity, effectiveness and reliability) and (b) the contributions of the SIDE framework from the evaluators' perspective.
Abstract:
Viral hepatitis constitutes a major health issue, with high prevalence among injecting drug users (IDUs). The present study assessed the prevalence of and risk determinants for hepatitis B, C and D virus (HBV, HCV and HDV) infections among 102 IDUs from Rio de Janeiro, Brazil. Serological markers and HCV RNA were detected by enzyme immunoassay and nested PCR, respectively. HCV genotypes were determined by restriction fragment length polymorphism (RFLP) analysis. HBsAg, anti-HBc and anti-HBs were found in 7.8%, 55.8% and 24.7% of IDUs, respectively. In the final logistic regression, HBV infection was independently associated with male homosexual intercourse within the last 5 years (odds ratio (OR) 3.1; 95% confidence interval (CI) 1.1-8.8). No subject presented anti-delta (anti-HD). Anti-HCV was detected in 69.6% of subjects and was found to be independently associated with needle sharing in the last 6 months (OR 3.4; 95% CI 1.3-9.2) and with longer duration of intravenous drug use (OR 3.1; 95% CI 1.1-8.7). These data demonstrate that this population is at high risk for both HBV and HCV infection. Among IDUs from Rio de Janeiro, unprotected sexual intercourse seems to be more closely associated with HBV infection, whereas HCV is positively correlated with high-risk injecting behavior. Comprehensive public health interventions targeting this population and their sexual partners must be encouraged.
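[Editor's note: as background for the odds ratios and confidence intervals reported above, the standard textbook formulation from logistic regression (general form, not this study's fitted model) is:]

```latex
\mathrm{OR} = e^{\hat{\beta}}, \qquad
95\%\ \mathrm{CI} = \exp\!\left(\hat{\beta} \pm 1.96\,\mathrm{SE}(\hat{\beta})\right)
```

Here $\hat{\beta}$ is the estimated regression coefficient of the risk factor; for example, OR = 3.1 corresponds to $\hat{\beta} = \ln 3.1 \approx 1.13$.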
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The capability of the code to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator, where the major difficulty lies not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments, and the validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; these need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This approach essentially complements the commonly used uncertainty assessment methods, which are usually based only on statistical methods.
Abstract:
In order to assess the molecular epidemiology of HIV-1 in two neighboring cities located near the epicenter of the HIV-1 epidemic in Brazil (Santos and São Paulo), we investigated 83 HIV-1 strains obtained from 95 samples collected in 1995 from intravenous drug users. The V3 through V5 region of the gp120 envelope gene was analyzed by heteroduplex mobility analysis. Of the 95 samples, 12 (12.6%) were PCR negative (6 samples from each city); a low DNA concentration was the reason for non-amplification in half of these cases. Of the 42 typed cases from São Paulo, 34 (81%, 95% confidence limits 74.9 to 87.0%) were subtype B and 8 (19%, 95% confidence limits 12.9 to 25.0%) were subtype F, whereas of the 41 typed cases from Santos, 39 (95%, 95% confidence limits 91.6 to 98.4%) were subtype B and 2 (5%, 95% confidence limits 1.6 to 8.4%) were subtype C. We therefore confirm the relationship between clade F and intravenous drug use in São Paulo, and the presence of clade C in Santos. The fact that different genetic subtypes of HIV-1 are co-circulating indicates a need for continuous surveillance of these subtypes, as well as of recombinant viruses, in Brazil.
Abstract:
The aim of the present study was to measure full epidermal thickness, stratum corneum thickness, rete ridge length, dermal papilla widening and suprapapillary epidermal thickness in psoriasis patients using a light microscope and computer-supported image analysis. The data obtained were analyzed in terms of patient age, type of psoriasis, total body surface area involvement, scalp and nail involvement, duration of psoriasis, and family history of the disease. The study was conducted on 64 patients and 57 controls whose skin biopsies were examined by light microscopy. The acquired microscopic images were transferred to a computer and measurements were made using image analysis. The skin biopsies, taken from different body areas, were examined for parameters such as epidermal, corneal and suprapapillary epidermal thickness. The most prominent increase in thickness was detected in the palmar region. Corneal thickening was more pronounced in patients with scalp involvement than in patients without scalp involvement (t = -2.651, P = 0.008). The most prominent increase in rete ridge length was observed in the knees (median: 491 µm, t = 10.117, P < 0.001). The difference in rete ridge length between patients with a positive and a negative family history was significant (t = -3.334, P = 0.03), the length being 27% greater in psoriasis patients without a family history. The differences in dermal papilla distances among patients were very small. We conclude that microscope-supported thickness measurements provide objective results.