839 results for "Ease of use"
Abstract:
Aiming to address requirements concerning integration of services in the context of "big data", this paper presents an innovative approach that (i) ensures a flexible, adaptable and scalable information and computation infrastructure, and (ii) exploits the competences of stakeholders and information workers to meaningfully confront information management issues such as information characterization, classification and interpretation, thus incorporating the underlying collective intelligence. Our approach pays much attention to the issues of usability and ease of use, not requiring any particular programming expertise from the end users. We report on a series of technical issues concerning the desired flexibility of the proposed integration framework and we provide related recommendations to developers of such solutions. Evaluation results are also discussed.
Abstract:
When designing human-machine interfaces it is important to consider not only the bare-bones functionality but also the ease of use and accessibility they provide. For voice-based interfaces, it has been shown that imbuing expressiveness into synthetic voices significantly increases their perceived naturalness, which is very helpful when building user-friendly interfaces. This paper proposes an adaptation-based expressiveness transplantation system capable of copying the emotions of a source speaker onto any desired target speaker with just a few minutes of read speech and without requiring the recording of additional expressive data. The system was evaluated through a perceptual test with 3 speakers, achieving average emotion recognition rates of up to 52% relative to those of the natural voice, while keeping good scores in similarity and naturalness.
Abstract:
The software engineering community has paid little attention to non-functional requirements, or quality attributes, compared with the studies performed on the capture, analysis and validation of functional requirements. This situation is even more pronounced in the case of distributed applications. In these applications we have to take into account, besides quality attributes such as correctness, robustness, extendibility, reusability, compatibility, efficiency, portability and ease of use, others such as reliability, scalability, transparency, security, interoperability, concurrency, etc. In this work we show how these latter attributes are related to the different abstractions that coexist in the problem domain. To achieve this goal, we have established a taxonomy of quality attributes of distributed applications and have determined the set of services necessary to support such attributes.
Abstract:
Biomass has always been associated with the development of the population of the Canary Islands: it was the first elementary energy source available in the archipelago and the main cause of the deforestation of its forests, and over the years it has been replaced by fossil fuels. The Canary Islands store a large amount of energy in the form of biomass. This may be important on a small scale for the design of small power plants fuelled by residues from agricultural activities, and such plants could supply rural areas that could thus become energy self-sufficient. The difficulty the Canary Islands face in achieving this is guaranteeing the supply to the consumption centres or power plants, which must operate continuously for greater efficiency and therefore need a resource available with regularity, with adequate quality and at an acceptable cost. The Canary Islands also have a unique topography with very rugged terrain, which makes the resource more difficult to use and significantly more expensive. In this work all these aspects are studied, and conclusions, lines of action and theoretical potentials are given.
Abstract:
Among the different optical modulator technologies available, such as polymer, III-V semiconductors and silicon, the well-known lithium niobate (LN) offers the best trade-off in terms of performance, ease of use and power-handling capability [1-9]. The LN technology is still widely deployed within current high-data-rate fibre-optic communications networks. This technology is also the most mature and guarantees the reliability required for space applications [9]. In order to fulfil the target specifications of opto-microwave payloads, an optimization of the design of a Mach-Zehnder (MZ) modulator working at the 1500 nm telecom wavelength was performed within the ESA-ARTES "Multi GigaHertz Optical Modulator" (MGOM) project in order to reach ultra-low optical insertion loss and low effective driving voltage in the Ka band. The selected modulator configuration was the X-cut crystal orientation, associated with a high-stability titanium in-diffusion process for the optical waveguide. Starting from an initial modulator configuration exhibiting a 9 V drive voltage @ 30 GHz, a complete redesign of the coplanar microwave electrodes was carried out in order to reach a 6 V drive voltage @ 30 GHz. This redesign was combined with an optimization of the interaction between the optical waveguide and the electrodes. Following the optimization steps, an evaluation programme was applied to a batch of 8 identical modulators. A full characterization was carried out to compare performances, showing small variations between the initial and final functional characteristics. In parallel, two similar modulators were submitted to both gamma (10-100 krad) and proton irradiation (10·10⁹ p/cm²) with minor performance degradation.
Abstract:
With the ever-growing trend of smart phones and tablets, Android is becoming more and more popular every day. With more than one billion active users to date, Android is the leading technology in the smart phone arena. In addition, Android also runs on Android TV, Android smart watches and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid 2013, the number of published applications on Google Play had exceeded one million and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Given this number of Android applications, it is quite evident that people rely on these applications on a daily basis for the completion of tasks ranging from simple ones, like keeping track of the weather, to rather complex ones, like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the gigantic number of applications, it becomes really hard to test Android applications manually, especially when they have to be verified for various versions of the OS and various device configurations, such as different screen sizes and different hardware availability. Hence, there has recently been a lot of work in the computer science community on developing different testing methods for Android applications.

The Android model attracts researchers because of its open-source nature. The whole research model is more streamlined when the code for both the application and the platform is readily available to analyze. Hence, there has been a great deal of research in testing and static analysis of Android applications, much of it focused on input test generation for Android applications. As a result, several testing tools are now available which focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use for generating test cases. But there is still very little work on the comparison of these testing tools and the strategies they use. Recently, some research work has been carried out in this regard that compared the performance of various available tools with respect to their code coverage, fault detection, ability to work on multiple platforms and ease of use. This was done by running these tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by the tools also face limitations and hence have room for improvement.

The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform. They can be anything, ranging from a basic system call for receiving an SMS to more complex tasks like sending the user to another application from the current one. The idea is to develop a benchmark for Android testing tools based on the performance related to these attributes. This will allow the comparison of these tools with respect to these attributes. For example, if there is an application that plays an audio file, will the testing tool be able to generate a test input that triggers the playback of this audio file?
Using multiple applications with different attributes, it becomes visible which testing tool is more useful for which kinds of attributes. In this thesis, it was decided that 9 attributes covering the basic nature of tasks would be targeted for the assessment of three testing tools. Later this can be done for many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from the previous work, is that the applications used are all specially made for this research. The reason for this is to analyze in isolation just the specific attribute the application is focused on, and not allow the tool to get bottlenecked by something trivial that is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools using the 9 applications specially designed and developed for this study.
• An analysis of the results obtained from the study and a comparison of the performance of the selected tools.
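To make the attribute idea concrete, the following is a minimal, hypothetical sketch (not taken from the thesis) of what one such single-attribute application could look like: an Activity whose only behaviour is playing a bundled audio resource when a button is tapped, so a testing tool exercises the attribute only if its generated inputs reach the playback completion callback. The package name, the audio resource R.raw.sample and the log tag are illustrative assumptions.

```kotlin
// Hypothetical single-attribute benchmark app: the only behaviour under test
// is starting (and completing) playback of a bundled audio resource.
package com.example.audioattribute

import android.app.Activity
import android.media.MediaPlayer
import android.os.Bundle
import android.util.Log
import android.widget.Button
import android.widget.LinearLayout

class AudioAttributeActivity : Activity() {

    private var player: MediaPlayer? = null

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // Build the UI in code so the example stays in a single file.
        val playButton = Button(this).apply { text = "Play audio" }
        setContentView(LinearLayout(this).apply { addView(playButton) })

        playButton.setOnClickListener {
            // The attribute under test: playing a local audio resource.
            // R.raw.sample is an assumed resource bundled with the app.
            player = MediaPlayer.create(this, R.raw.sample).apply {
                setOnCompletionListener { mp ->
                    // Reaching this callback marks the attribute as exercised.
                    Log.i("AudioAttribute", "Playback completed")
                    mp.release()
                }
                start()
            }
        }
    }

    override fun onDestroy() {
        player?.release()
        super.onDestroy()
    }
}
```

A testing tool is then judged on whether its automatically generated inputs ever tap the button and let playback run to completion, with no other application logic available to inflate its coverage.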
Abstract:
The role of channel inactivation in the molecular mechanism of calcium (Ca2+) channel block by phenylalkylamines (PAA) was analyzed by designing mutant Ca2+ channels that carry the high affinity determinants of the PAA receptor site [Hockerman, G. H., Johnson, B. D., Scheuer, T., and Catterall, W. A. (1995) J. Biol. Chem. 270, 22119–22122] but inactivate at different rates. Use-dependent block by PAAs was studied after expressing the mutant Ca2+ channels in Xenopus oocytes. Substitution of single putative pore-orientated amino acids in segment IIIS6 by alanine (F-1499-A, F-1500-A, F-1510-A, I-1514-A, and F-1515-A) gradually slowed channel inactivation and simultaneously reduced inhibition of barium currents (IBa) by (−)D600 upon depolarization by 100 ms steps at 0.1 Hz. This apparent reduction in drug sensitivity was only evident if test pulses were applied at a low frequency of 0.1 Hz and almost disappeared at the frequency of 1 Hz. (−)D600 slowed IBa recovery after maintained membrane depolarization (1–3 sec) to a comparable extent in all channel constructs. A drug-induced delay in the onset of IBa recovery from inactivation suggests that PAAs promote the transition to a deep inactivated channel conformation. These findings indicate that apparent PAA sensitivity of Ca2+ channels is not only defined by drug interaction with its receptor site but also crucially dependent on intrinsic gating properties of the channel molecule. A molecular model for PAA-Ca2+ channel interaction that accounts for the relationship between drug induced inactivation and channel block by PAA is proposed.
Abstract:
Objectives: To assess whether the levonorgestrel intrauterine system could provide a conservative alternative to hysterectomy in the treatment of excessive uterine bleeding.
Abstract:
We have developed a semi-synthetic approach for preparing long stretches of DNA (>100 bp) containing internal chemical modifications and/or non-Watson–Crick structural motifs which relies on splint-free, cell-free DNA ligations and recycling of side-products by non-PCR thermal cycling. A double-stranded DNA PCR fragment containing a polylinker in its middle is digested with two restriction enzymes and a small insert (∼20 bp) containing the modification or non-Watson–Crick motif of interest is introduced into the middle. Incorrect products are recycled to starting materials by digestion with appropriate restriction enzymes, while the correct product is resistant to digestion since it does not contain these restriction sites. This semi-synthetic approach offers several advantages over DNA splint-mediated ligations, including fewer steps, substantially higher yields (∼60% overall yield) and ease of use. This method has numerous potential applications, including the introduction of modifications such as fluorophores and cross-linking agents into DNA, controlling the shape of DNA on a large scale and the study of non-sequence-specific nucleic acid–protein interactions.
Abstract:
This paper provides an overview of a case study research that investigated the use of Digital Library (DL) resources in two undergraduate classes and explored faculty and students’ perceptions of educational digital libraries. This study found that students and faculty use academic DLs primarily for textual resources, but turn to the open Web for visual and multimedia resources. The study participants did not perceive academic libraries as a useful source of digital images and used search engines when searching for visual resources. The limited use of digital library resources for teaching and learning is associated with perceptions of usefulness and ease of use, especially if considered in a broader information landscape, in conjunction with other library information systems, and in the context of Web resources. The limited use of digital libraries is related to the following perceptions: 1) Library systems are not viewed as user-friendly, which in turn discourages potential users from trying DLs provided by academic libraries; 2) Academic libraries are perceived as places of primarily textual resources; perceptions of usefulness, especially in regard to relevance of content, coverage, and currency, seem to have a negative effect on user intention to use DLs, especially when searching for visual materials.
Empirical study on the maintainability of Web applications: Model-driven Engineering vs Code-centric
Abstract:
Model-driven Engineering (MDE) approaches are often acknowledged to improve the maintainability of the resulting applications. However, there is a scarcity of empirical evidence backing their claimed benefits and limitations with respect to code-centric approaches. The purpose of this paper is to compare the performance and satisfaction of junior software maintainers while executing maintainability tasks on Web applications built with two different development approaches, one being OOH4RIA, a model-driven approach, and the other a code-centric approach based on Visual Studio .NET and the Agile Unified Process. We have conducted a quasi-experiment with 27 graduate students from the University of Alicante. They were randomly divided into two groups, and each group was assigned to a different Web application on which they performed a set of maintainability tasks. The results show that maintaining Web applications with OOH4RIA clearly improves the performance of subjects. It also tips the satisfaction balance in favor of OOH4RIA, although not significantly. Model-driven development methods seem to improve both the developers' objective performance and their subjective opinions on the ease of use of the method. This notwithstanding, further experimentation is needed to be able to generalize the results to different populations, methods, languages and tools, different domains and different application sizes.
Project SCORE! Coaches’ Perceptions of an Online Tool to Promote Positive Youth Development in Sport
Abstract:
Research points to the potential of youth sport as an avenue to support the growth of particular assets and outcomes. A recurring theme in this line of research is the need to train coaches to deliberately and consistently deliver themes relating to positive youth development (PYD) in youth sport programs. The purpose of the study was to design and deliver a technology-based PYD program. Project SCORE! (www.projectscore.ca) is a series of 10 lessons to help coaches integrate PYD into sport. Four youth sport coaches completed the program in this first phase of the research and were interviewed. The goal of this study was to gain insights from coaches as they completed the program. Positive comments about the program (e.g., ease of use, success of particular lessons, coaches' personal growth) and challenges regarding teaching positive skills to youth are discussed. These results helped to shape the program and make the changes necessary for it to be used in a larger research study. Other implications and future research directions are discussed.