901 results for user requirement
Abstract:
There is a growing drive for continuous improvement within companies, which creates an obligation to reduce and, where possible, eliminate waste. The Production Planning and Control (PCP) department is no exception, making it necessary to apply methods and build tools that eliminate steps that add no value to the planning process. This paper develops a tool that concentrates in a single place all the information needed to run packaging material requirements planning (MRP) in an agribusiness company. It also aims, in a more visual way and using mistake-proofing devices (Poka-Yoke), to reduce the number of revisions and errors made by analysts. As a result, an Excel spreadsheet was developed that shows the status of packaging planning and receiving and raises alerts when a critical situation occurs. Lean Manufacturing concepts and the action-research method helped to define the problem well and to reduce the number of steps, the number of spreadsheets, and the process time by 80%, 60%, and 75%, respectively.
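The abstract does not detail the spreadsheet's logic; the sketch below only illustrates the Poka-Yoke idea it describes (automatically flagging critical planning and receiving states). The field names and thresholds are assumptions, not the authors' tool.

```python
# Illustrative sketch of a Poka-Yoke style status check for packaging MRP.
# Field names and the shortage rule are assumptions, not the paper's tool.
from dataclasses import dataclass

@dataclass
class PackagingItem:
    code: str
    on_hand: float             # units in stock
    scheduled_receipts: float  # units confirmed for delivery this period
    gross_requirement: float   # units demanded by the production plan

def status(item: PackagingItem) -> str:
    projected = item.on_hand + item.scheduled_receipts - item.gross_requirement
    if projected < 0:
        return f"CRITICAL: {item.code} short by {-projected:.0f} units"
    if projected < 0.1 * item.gross_requirement:
        return f"WARNING: {item.code} buffer below 10% of demand"
    return f"OK: {item.code}"

print(status(PackagingItem("BOX-200", on_hand=500,
                           scheduled_receipts=200, gross_requirement=900)))
```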
Abstract:
Innovation is an essential factor for obtaining competitive advantages. The search for external sources of knowledge for product creation, which can contribute to the innovation process, has become constant among companies, and users play an important role in this search. In this study, we analyze users' involvement in the product development process based on open innovation concepts, using the single case study research method. The study was carried out in an automotive company that developed a concept-car project involving users through Web 2.0. Within this scope, the research demonstrates that users can contribute not only to the generation of ideas but also to the innovation process itself.
Abstract:
End users develop more software than any other group of programmers, using software authoring devices such as e-mail filtering editors, by-demonstration macro builders, and spreadsheet environments. Despite this, there has been little research on finding ways to help these programmers with the dependability of their software. We have been addressing this problem in several ways, one of which includes supporting end-user debugging activities through fault localization techniques. This paper presents the results of an empirical study conducted in an end-user programming environment to examine the impact of two separate factors in fault localization techniques that affect technique effectiveness. Our results shed new insights into fault localization techniques for end-user programmers and the factors that affect them, with significant implications for the evaluation of those techniques.
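The abstract does not name the techniques studied; as one well-known concrete instance of the fault-localization family it refers to, the sketch below computes the Tarantula suspiciousness score, shown purely for illustration.

```python
# One well-known fault-localization heuristic: Tarantula suspiciousness.
# Illustrative only; not necessarily one of the techniques studied here.
def suspiciousness(passed_cov: int, failed_cov: int,
                   total_passed: int, total_failed: int) -> float:
    """Higher values mark cells/statements more likely to be faulty."""
    fail_rate = failed_cov / total_failed if total_failed else 0.0
    pass_rate = passed_cov / total_passed if total_passed else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0

# A statement covered by 4 of 5 failing runs but only 1 of 10 passing runs:
print(suspiciousness(passed_cov=1, failed_cov=4,
                     total_passed=10, total_failed=5))  # ~0.89
```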
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults, and stories abound of spreadsheet faults that have led to multi-million dollar losses. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software.
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, focusing on methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several costs and benefits trade-offs, and many opportunities for future work.
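As an illustration of the slicing machinery mentioned above, here is a minimal sketch of a backward slice over a toy spreadsheet dependency graph; the representation is an assumption for illustration, not the authors' system.

```python
# Minimal sketch of backward slicing over a spreadsheet dependency graph.
# The toy formulas and graph representation are assumptions.
deps = {                      # cell -> cells it reads from
    "A3": ["A1", "A2"],       # e.g. A3 = A1 + A2
    "B1": ["A3"],             # B1 = A3 * 2
    "C1": ["B1", "A2"],       # C1 = B1 - A2
}

def backward_slice(cell: str) -> set[str]:
    """All cells that can influence `cell` -- candidates when it looks wrong."""
    seen, stack = set(), [cell]
    while stack:
        for parent in deps.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

print(backward_slice("C1"))   # {'A1', 'A2', 'A3', 'B1'}
```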
Abstract:
Recommendations:
• Become a beta partner with the vendor
• Test load collections before going live
• Update cataloging codes to benefit your community
• Don't expect to drastically change cataloging practices
Abstract:
The ability to utilize information systems (IS) effectively is becoming a necessity for business professionals. However, individuals differ in their abilities to use IS effectively, with some achieving exceptional performance in IS use and others being unable to do so. Therefore, developing a set of skills and attributes to achieve IS user competency, or the ability to realize the fullest potential and the greatest performance from IS use, is important. Various constructs have been identified in the literature to describe IS users with regard to their intentions to use IS and their frequency of IS usage, but studies to describe the relevant characteristics associated with highly competent IS users, or those who have achieved IS user competency, are lacking. This research develops a model of IS user competency by using the Repertory Grid Technique to identify a broad set of characteristics of highly competent IS users. A qualitative analysis was carried out to identify categories and sub-categories of these characteristics. Then, based on the findings, a subset of the model of IS user competency focusing on the IS-specific factors – domain knowledge of and skills in IS, willingness to try and to explore IS, and perception of IS value – was developed and validated using the survey approach. The survey findings suggest that all three factors are relevant and important to IS user competency, with willingness to try and to explore IS being the most significant factor. This research generates a rich set of factors explaining IS user competency, such as perception of IS value. The results not only highlight characteristics that can be fostered in IS users to improve their performance with IS use, but also present research opportunities for IS training and potential hiring criteria for IS users in organizations.
Abstract:
Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from several web sources. To support end users, communities are forming around mashup development environments that facilitate sharing code and knowledge. We have observed, however, that end user mashups tend to suffer from several deficiencies, such as inoperable components or references to invalid data sources, and that those deficiencies are often propagated through the rampant reuse in these end user communities. In this work, we identify and specify ten code smells indicative of deficiencies we observed in a sample of 8,051 pipe-like web mashups developed by thousands of end users in the popular Yahoo! Pipes environment. We show through an empirical study that end users generally prefer pipes that lack those smells, and then present eleven specialized refactorings that we designed to target and remove the smells. Our refactorings reduce the complexity of pipes, increase their abstraction, update broken sources of data and dated components, and standardize pipes to fit the community development patterns. Our assessment on the sample of mashups shows that smells are present in 81% of the pipes, and that the proposed refactorings can reduce that number to 16%, illustrating the potential of refactoring to support thousands of end users developing pipe-like mashups.
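The ten smells themselves are not listed in this abstract; the sketch below shows what detecting one such deficiency (a reference to an invalid data source) might look like, assuming a simplified pipe representation rather than Yahoo! Pipes' actual schema.

```python
# Sketch of detecting one mashup "smell": a module pointing at a dead data
# source. The pipe dictionary format is an assumption, not Yahoo! Pipes' schema.
import urllib.request
import urllib.error

pipe = {"modules": [
    {"id": "fetch-1", "type": "fetch", "url": "https://example.com/feed.xml"},
    {"id": "sort-1",  "type": "sort",  "url": None},
]}

def dead_source_smells(pipe: dict, timeout: float = 5.0) -> list[str]:
    """Return ids of modules whose data source does not respond."""
    smelly = []
    for mod in pipe["modules"]:
        if not mod.get("url"):
            continue
        try:
            req = urllib.request.Request(mod["url"], method="HEAD")
            urllib.request.urlopen(req, timeout=timeout)
        except (urllib.error.URLError, ValueError):
            smelly.append(mod["id"])
    return smelly

print(dead_source_smells(pipe))
```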
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
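The abstract does not give the riverbed connectivity function itself; the sketch below shows only the generic image foresting transform skeleton (Dijkstra-style propagation from a user anchor point), with a simple additive cost as a stand-in for the paper's never-exploited connectivity function.

```python
# Skeleton of the image foresting transform (IFT): Dijkstra-style propagation
# over a 4-connected pixel grid. The additive cost is a stand-in; the paper's
# riverbed connectivity function is different.
import heapq

def ift(grad, seed):
    """grad: 2D list of gradient magnitudes; seed: (row, col) anchor point.
    Returns the optimum-path cost map from the seed."""
    rows, cols = len(grad), len(grad[0])
    cost = [[float("inf")] * cols for _ in range(rows)]
    cost[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]
    while heap:
        c, (r, k) = heapq.heappop(heap)
        if c > cost[r][k]:
            continue                    # stale heap entry
        for dr, dk in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nk = r + dr, k + dk
            if 0 <= nr < rows and 0 <= nk < cols:
                nc = c + grad[nr][nk]   # additive stand-in cost
                if nc < cost[nr][nk]:
                    cost[nr][nk] = nc
                    heapq.heappush(heap, (nc, (nr, nk)))
    return cost

print(ift([[1, 9, 1], [1, 9, 1], [1, 1, 1]], (0, 0)))
```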
Abstract:
2-Cys peroxiredoxin (Prx) enzymes are ubiquitously distributed peroxidases that make use of a peroxidatic cysteine (CysP) to decompose hydroperoxides. A disulfide bond is generated as a consequence of the partial unfolding of the alpha-helix that contains CysP. Therefore, during its catalytic cycle, 2-Cys Prx alternates between two states, locally unfolded and fully folded. Tsa1 (thiol-specific antioxidant protein 1 from yeast) is by far the most abundant Cys-based peroxidase in Saccharomyces cerevisiae. In this work, we present the crystallographic structure at 2.8 Å resolution of Tsa1(C47S) in the decameric form [(α₂)₅] with a DTT molecule bound to the active site, representing one of the few available reports of a 2-Cys Prx (AhpC-Prx1 subfamily; AhpC, alkyl hydroperoxide reductase subunit C) structure that incorporates a ligand. The analysis of the Tsa1(C47S) structure indicated that Glu50 and Arg146 participate in the stabilization of the CysP alpha-helix. As a consequence, we raised the hypothesis that Glu50 and Arg146 might be relevant to CysP reactivity. Therefore, Tsa1(E50A) and Tsa1(R146Q) mutants were generated and were still able to decompose hydrogen peroxide, presenting a second-order rate constant in the range of 10^6 M^-1 s^-1. Remarkably, although Tsa1(E50A) and Tsa1(R146Q) were efficiently reduced by the low-molecular-weight reductant DTT, these mutants displayed only marginal thioredoxin (Trx)-dependent peroxidase activity, indicating that Glu50 and Arg146 are important for the Tsa1-Trx interaction. These results may impact the comprehension of downstream events of signaling pathways that are triggered by the oxidation of critical Cys residues, such as Trx.
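For context on the quoted rate constant, the bimolecular rate law it enters is shown below; the 1 µM H2O2 concentration in the worked step is an assumed illustrative value, not a measurement from this work.

```latex
% Bimolecular rate law for Cys(P) oxidation. The 1 uM H2O2 level is an
% assumed illustrative value, not a measurement from this study.
\[
  v = k\,[\mathrm{Prx}][\mathrm{H_2O_2}], \qquad
  k \approx 10^{6}\ \mathrm{M^{-1}\,s^{-1}}
\]
\[
  k' = k\,[\mathrm{H_2O_2}]
     \approx 10^{6}\ \mathrm{M^{-1}\,s^{-1}} \times 10^{-6}\ \mathrm{M}
     = 1\ \mathrm{s^{-1}}
\]
```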
Abstract:
This paper presents a performance analysis of a baseband multiple-input single-output ultra-wideband system over scenarios CM1 and CM3 of the IEEE 802.15.3a channel model, incorporating four different schemes of pre-distortion: time reversal, zero-forcing pre-equaliser, constrained least squares pre-equaliser, and minimum mean square error pre-equaliser. For the third case, a simple solution based on the steepest-descent (gradient) algorithm is adopted and compared with theoretical results. The channel estimations at the transmitter are assumed to be truncated and noisy. Results show that the constrained least squares algorithm has a good trade-off between intersymbol interference reduction and signal-to-noise ratio preservation, providing a performance comparable to the minimum mean square error method but with lower computational complexity.
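As a rough illustration of the pre-distortion trade-off the paper analyzes, the sketch below contrasts zero-forcing with MMSE pre-equalization in the frequency domain; the channel taps and noise level are toy values, not the IEEE 802.15.3a CM1/CM3 realizations used in the paper.

```python
# Toy contrast of zero-forcing vs. MMSE pre-distortion in the frequency
# domain. Channel taps and noise variance are illustrative assumptions.
import numpy as np

h = np.array([1.0, 0.5, 0.2])   # assumed channel impulse response
N = 64                          # FFT size
H = np.fft.fft(h, N)
noise_var = 0.01

ZF = 1.0 / H                                     # inverts the channel exactly
MMSE = np.conj(H) / (np.abs(H) ** 2 + noise_var) # regularized inverse

# ZF can demand large transmit gain near spectral nulls; MMSE trades a little
# residual ISI for bounded gain:
print(np.max(np.abs(ZF)), np.max(np.abs(MMSE)))
```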
Abstract:
Ubiquitous computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware constraints. Due to its interactive nature in particular, the interface content of Web applications must adapt to a large diversity of devices and contexts. To overcome such obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation - the implementation of static Web interfaces - with dynamic adaptation - the alteration, at execution time, of static interfaces to adapt them to different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies. Along these lines, we designed and implemented UbiCon, a framework over which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.
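A minimal sketch of the hybrid idea described above: choose a statically authored interface variant, then adapt it at run time to the detected context. The context fields and rules are assumptions and do not reflect UbiCon's API.

```python
# Minimal sketch of hybrid (static + dynamic) interface adaptation.
# Context fields and rules are illustrative; this is not UbiCon's API.
STATIC_VARIANTS = {"desktop": "layout_full.html", "mobile": "layout_compact.html"}

def dynamic_adapt(html_name: str, context: dict) -> str:
    """Run-time tweaks applied on top of the chosen static variant."""
    tweaks = []
    if context.get("bandwidth_kbps", 10_000) < 500:
        tweaks.append("defer-images")
    if context.get("dark_mode"):
        tweaks.append("dark-theme")
    return f"{html_name} + {tweaks}"

def render(context: dict) -> str:
    base = STATIC_VARIANTS["mobile" if context.get("screen_px", 1920) < 800
                           else "desktop"]   # static adaptation
    return dynamic_adapt(base, context)      # dynamic adaptation

print(render({"screen_px": 390, "bandwidth_kbps": 300, "dark_mode": True}))
```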
Abstract:
Background: Several mathematical and statistical methods have been proposed in recent years to analyze microarray data. Most of those methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use those methods in their research. Here we present a user-friendly toolbox that allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results: We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories and to researchers with no background in mathematics and computer science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion: GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms, previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in molecular biology.
Abstract:
AIM: To analyze the search for Emergency Care (EC) in the Western Health District of Ribeirão Preto (São Paulo), in order to identify the reasons why users turn to these services in situations that are not characterized as urgencies or emergencies. METHODS: A qualitative and descriptive study was undertaken. A guiding script was applied to 23 EC users, addressing questions related to health service accessibility and welcoming, problem solving, reasons for visiting the EC, and comprehensiveness of care. RESULTS: The subjects reported that, at the primary health care services, receiving care and scheduling consultations took a long time and that the opening hours of these services coincide with their work hours. At the EC service, access to technologies and medicines was easier. CONCLUSION: Primary health care services have not managed to become the entry point to the health system, being replaced by emergency services, which puts a significant strain on those services' capacity.
Abstract:
In this paper, we present a novel approach to performing similarity queries over medical images that maintains the semantics of the query posted by the user. Content-based image retrieval systems relying on relevance feedback techniques usually ask users to label images as relevant or irrelevant. We present a highly effective strategy for building user profiles that takes advantage of such labeling to implicitly gather the user's perceptual similarity. The profiles maintain the settings desired by each user, allowing the similarity assessment to be tuned, which encompasses dynamically changing the distance function employed through an interactive process. Experiments on medical images show that the method is effective and can improve the decision-making process during analysis.
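As a sketch of how relevance feedback can retune a distance function, the snippet below uses a classic inverse-variance reweighting of a weighted Euclidean distance; this is a standard illustrative scheme, not necessarily the paper's method.

```python
# Classic relevance-feedback step: reweight a weighted Euclidean distance
# using the spread of features among images the user marked relevant.
# Illustrative standard scheme, not the paper's method.
import numpy as np

def update_weights(relevant_feats: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Features consistent across relevant images get larger weights."""
    w = 1.0 / (np.std(relevant_feats, axis=0) + eps)
    return w / w.sum()

def distance(q: np.ndarray, x: np.ndarray, w: np.ndarray) -> float:
    return float(np.sqrt(np.sum(w * (q - x) ** 2)))

# Three images the user labeled relevant (rows = images, cols = features):
relevant = np.array([[0.90, 0.1, 0.5],
                     [0.80, 0.7, 0.5],
                     [0.95, 0.3, 0.5]])
w = update_weights(relevant)   # the constant third feature dominates
print(w, distance(np.array([0.9, 0.2, 0.5]), np.array([0.1, 0.2, 0.5]), w))
```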