851 results for user-profiling
Abstract:
Innovation is an essential factor in obtaining competitive advantages. The search for external knowledge sources for product creation, which can contribute to the innovation process, has become a constant among companies, and users play an important role in this search. In this study, we aimed to analyze users' involvement in the product development process based on open innovation concepts. We used the single-case study research method. The study was carried out at an automotive company that developed a concept-car project involving users through Web 2.0. Within this scope, the research demonstrates that users can contribute not only to the generation of ideas but also to the innovation process itself.
Abstract:
Surveys of commercial markets combined with molecular taxonomy (i.e. molecular monitoring) provide a means to detect products from illegal, unregulated and/or unreported (IUU) exploitation, including the sale of fisheries bycatch and wild meat (bushmeat). Capture-recapture analyses of market products using DNA profiling have the potential to estimate the total number of individuals entering the market. However, these analyses are not directly analogous to those of living individuals because a ‘market individual’ does not die suddenly but, instead, remains available for a time in decreasing quantities, rather like the exponential decay of a radioactive isotope. Here we use mitochondrial DNA (mtDNA) sequences and microsatellite genotypes to individually identify products from North Pacific minke whales (Balaenoptera acutorostrata ssp.) purchased in 12 surveys of markets in the Republic of (South) Korea from 1999 to 2003. By applying a novel capture-recapture model with a decay rate parameter to the 205 unique DNA profiles found among 289 products, we estimated that the total number of whales entering trade across the five-year survey period was 827 (SE, 164; CV, 0.20) and that the average ‘half-life’ of products from an individual whale on the market was 1.82 months (SE, 0.24; CV, 0.13). Our estimate of whales in trade (reflecting the true numbers killed) was significantly greater than the officially reported bycatch of 458 whales for this period. This unregulated exploitation has serious implications for the survival of this genetically distinct coastal population. Although our capture-recapture model was developed for specific application to the Korean whale-meat markets, the exponential decay function could be modified to improve the estimates of trade in other wildmeat or fisheries markets or abundance of living populations by noninvasive genotyping.
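The 'half-life' reported above follows from a standard exponential-decay assumption; as a minimal sketch (not the authors' full capture-recapture likelihood), the availability of a market product and the decay rate relate as:

```latex
% Sketch of the exponential-decay assumption behind the reported half-life
% (illustrative; not the full capture-recapture model with a decay rate parameter)
p(t) = e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}
% the reported half-life t_{1/2} = 1.82 months implies \lambda = \ln 2 / 1.82 \approx 0.38\ \text{month}^{-1}
```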
Abstract:
End users develop more software than any other group of programmers, using software authoring devices such as e-mail filtering editors, by-demonstration macro builders, and spreadsheet environments. Despite this, there has been little research on finding ways to help these programmers with the dependability of their software. We have been addressing this problem in several ways, one of which includes supporting end-user debugging activities through fault localization techniques. This paper presents the results of an empirical study conducted in an end-user programming environment to examine the impact of two separate factors in fault localization techniques that affect technique effectiveness. Our results provide new insights into fault localization techniques for end-user programmers and the factors that affect them, with significant implications for the evaluation of those techniques.
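As context for what a fault localization technique computes, the sketch below implements a textbook suspiciousness score (Tarantula-style) over program entities such as spreadsheet cells; it is an illustrative assumption, not one of the techniques or factors studied in the paper.

```python
def suspiciousness(coverage, outcomes):
    """Tarantula-style suspiciousness for each program entity (e.g., a spreadsheet cell).

    coverage: dict mapping entity -> set of test ids that exercised it
    outcomes: dict mapping test id -> True if the test passed, False if it failed
    Higher scores flag entities more strongly correlated with failing tests.
    """
    passed = {t for t, ok in outcomes.items() if ok}
    failed = {t for t, ok in outcomes.items() if not ok}
    scores = {}
    for entity, tests in coverage.items():
        frac_failed = len(tests & failed) / len(failed) if failed else 0.0
        frac_passed = len(tests & passed) / len(passed) if passed else 0.0
        total = frac_failed + frac_passed
        scores[entity] = frac_failed / total if total > 0 else 0.0
    return scores

# Toy example with hypothetical cells and tests: B1 is used by both failing tests.
cov = {"A1": {"t1", "t2"}, "B1": {"t2", "t3", "t4"}}
out = {"t1": True, "t2": True, "t3": False, "t4": False}
print(suspiciousness(cov, out))  # B1 scores higher than A1
```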
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults, and stories abound of spreadsheet faults that have led to multi-million dollar losses. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software.
Abstract:
Not long ago, most software was written by professional programmers, who could be presumed to have an interest in software engineering methodologies and in tools and techniques for improving software dependability. Today, however, a great deal of software is written not by professionals but by end-users, who create applications such as multimedia simulations, dynamic web pages, and spreadsheets. Applications such as these are often used to guide important decisions or aid in important tasks, and it is important that they be sufficiently dependable, but evidence shows that they frequently are not. For example, studies have shown that a large percentage of the spreadsheets created by end-users contain faults. Despite such evidence, until recently, relatively little research had been done to help end-users create more dependable software. We have been working to address this problem by finding ways to provide at least some of the benefits of formal software engineering techniques to end-user programmers. In this talk, focusing on the spreadsheet application paradigm, I present several of our approaches, concentrating on methodologies that utilize source-code-analysis techniques to help end-users build more dependable spreadsheets. Behind the scenes, our methodologies use static analyses such as dataflow analysis and slicing, together with dynamic analyses such as execution monitoring, to support user tasks such as validation and fault localization. I show how, to accommodate the user base of spreadsheet languages, an interface to these methodologies can be provided in a manner that does not require an understanding of the theory behind the analyses, yet supports the interactive, incremental process by which spreadsheets are created. Finally, I present empirical results gathered in the use of our methodologies that highlight several cost-benefit trade-offs and many opportunities for future work.
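To make the kind of static analysis mentioned above concrete, the sketch below builds a cell-dependency graph from toy spreadsheet formulas and computes a backward slice (all cells whose values can flow into a chosen cell). The formula syntax and data are simplified assumptions for illustration, not the methodologies implemented in the tools described in the talk.

```python
import re

# Toy spreadsheet: each cell maps to a formula string (a simplifying assumption).
sheet = {
    "A1": "5",
    "A2": "7",
    "B1": "=A1+A2",
    "B2": "=B1*2",
    "C1": "=B2-A1",
}

def referenced_cells(formula):
    """Cells referenced by a formula (very small regex-based 'parser')."""
    return set(re.findall(r"[A-Z]+[0-9]+", formula)) if formula.startswith("=") else set()

def backward_slice(sheet, target):
    """All cells whose values can affect `target`, found by walking dependencies."""
    visited, stack = set(), [target]
    while stack:
        cell = stack.pop()
        for dep in referenced_cells(sheet.get(cell, "")):
            if dep not in visited:
                visited.add(dep)
                stack.append(dep)
    return visited

print(backward_slice(sheet, "C1"))  # {'A1', 'A2', 'B1', 'B2'}
```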
Abstract:
Recommendations:
• Become a beta partner with vendor
• Test load collections before going live
• Update cataloging codes to benefit your community
• Don't expect to drastically change cataloging practices
Abstract:
The ability to utilize information systems (IS) effectively is becoming a necessity for business professionals. However, individuals differ in their abilities to use IS effectively, with some achieving exceptional performance in IS use and others being unable to do so. Therefore, developing a set of skills and attributes to achieve IS user competency, or the ability to realize the fullest potential and the greatest performance from IS use, is important. Various constructs have been identified in the literature to describe IS users with regard to their intentions to use IS and their frequency of IS usage, but studies to describe the relevant characteristics associated with highly competent IS users, or those who have achieved IS user competency, are lacking. This research develops a model of IS user competency by using the Repertory Grid Technique to identify a broad set of characteristics of highly competent IS users. A qualitative analysis was carried out to identify categories and sub-categories of these characteristics. Then, based on the findings, a subset of the model of IS user competency focusing on the IS-specific factors – domain knowledge of and skills in IS, willingness to try and to explore IS, and perception of IS value – was developed and validated using the survey approach. The survey findings suggest that all three factors are relevant and important to IS user competency, with willingness to try and to explore IS being the most significant factor. This research generates a rich set of factors explaining IS user competency, such as perception of IS value. The results not only highlight characteristics that can be fostered in IS users to improve their performance with IS use, but also present research opportunities for IS training and potential hiring criteria for IS users in organizations.
Abstract:
Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from several web sources. To support end users, communities are forming around mashup development environments that facilitate sharing code and knowledge. We have observed, however, that end user mashups tend to suffer from several deficiencies, such as inoperable components or references to invalid data sources, and that those deficiencies are often propagated through the rampant reuse in these end user communities. In this work, we identify and specify ten code smells indicative of deficiencies we observed in a sample of 8,051 pipe-like web mashups developed by thousands of end users in the popular Yahoo! Pipes environment. We show through an empirical study that end users generally prefer pipes that lack those smells, and then present eleven specialized refactorings that we designed to target and remove the smells. Our refactorings reduce the complexity of pipes, increase their abstraction, update broken sources of data and dated components, and standardize pipes to fit the community development patterns. Our assessment on the sample of mashups shows that smells are present in 81% of the pipes, and that the proposed refactorings can reduce that number to 16%, illustrating the potential of refactoring to support thousands of end users developing pipe-like mashups.
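Purely for illustration, the sketch below checks a hypothetical, simplified pipe representation for two of the kinds of deficiencies mentioned above: a component with no path to the output, and a data source whose URL is not even syntactically valid. The data structure and checks are assumptions, not the paper's ten smells or eleven refactorings.

```python
from urllib.parse import urlparse

# Hypothetical, simplified pipe: modules plus wires from one module to another.
pipe = {
    "modules": {
        "fetch1": {"type": "fetch", "url": "http://example.com/feed.xml"},
        "fetch2": {"type": "fetch", "url": "not a valid url"},
        "sort":   {"type": "sort"},
        "output": {"type": "output"},
    },
    "wires": [("fetch1", "sort"), ("sort", "output")],
}

def unreachable_modules(pipe):
    """Modules with no path to the output module (a 'dead component' deficiency)."""
    reaches = {"output"}
    changed = True
    while changed:
        changed = False
        for src, dst in pipe["wires"]:
            if dst in reaches and src not in reaches:
                reaches.add(src)
                changed = True
    return set(pipe["modules"]) - reaches

def malformed_sources(pipe):
    """Fetch modules whose URL lacks a scheme or host (an 'invalid source' deficiency)."""
    bad = set()
    for name, module in pipe["modules"].items():
        if module.get("type") == "fetch":
            parts = urlparse(module.get("url", ""))
            if not (parts.scheme and parts.netloc):
                bad.add(name)
    return bad

print(unreachable_modules(pipe))  # {'fetch2'}
print(malformed_sources(pipe))    # {'fetch2'}
```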
Abstract:
This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
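The image foresting transform at the core of this approach is a Dijkstra-like propagation of optimum path costs from seed pixels over the image graph. The sketch below uses a plain additive connectivity function on a 4-connected grid as an assumption for illustration; it is not the new connectivity function proposed in the paper.

```python
import heapq

def ift_additive(image, seeds):
    """Minimal image foresting transform on a 4-connected grid.

    image: 2D list of intensities; seeds: list of (row, col) anchor pixels.
    Path cost is the sum of absolute intensity differences along the path
    (an illustrative additive connectivity function, not the paper's).
    Returns the optimum-cost map and the predecessor map (the forest).
    """
    rows, cols = len(image), len(image[0])
    cost = [[float("inf")] * cols for _ in range(rows)]
    pred = [[None] * cols for _ in range(rows)]
    heap = []
    for r, c in seeds:
        cost[r][c] = 0
        heapq.heappush(heap, (0, r, c))
    while heap:
        c0, r, c = heapq.heappop(heap)
        if c0 > cost[r][c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                new_cost = c0 + abs(image[nr][nc] - image[r][c])
                if new_cost < cost[nr][nc]:
                    cost[nr][nc] = new_cost
                    pred[nr][nc] = (r, c)
                    heapq.heappush(heap, (new_cost, nr, nc))
    return cost, pred

# Example: path costs propagated from a single anchor point.
img = [[0, 0, 9], [0, 9, 9], [0, 0, 0]]
costs, _ = ift_additive(img, [(0, 0)])
print(costs)
```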
Abstract:
Positive selection (PS) in the thymus involves the presentation of self-peptides that are bound to MHC class II on the surface of cortical thymus epithelial cells (cTECs). The Prss16 gene is one important element regulating the PS of CD4(+) T lymphocytes; it encodes Thymus-specific serine protease (Tssp), a cTEC serine-type peptidase involved in the proteolytic generation of self-peptides. Nevertheless, additional peptidase genes participating in the generation of self-peptides remain to be found. Because of its role in the mechanism of PS and its expression in cTECs, the Prss16 gene might be used as a transcriptional marker to identify new genes that share the same expression profile and that encode peptidases in the thymus. To test this hypothesis, we compared the differential thymic expression of 4,500 mRNAs of wild-type (WT) C57BL/6 mice with that of their respective Prss16-knockout (KO) mutants using microarrays. Of these, 223 genes were differentially expressed, of which 115 had known molecular/biological functions. Four endopeptidase genes (Casp1, Casp2, Psmb3 and Tpp2) share the same expression profile as the Prss16 gene, i.e., induced in WT and repressed in KO, while one endopeptidase gene, Capns1, features the opposite expression profile. The Tpp2 gene is highlighted because it encodes a serine-type endopeptidase functionally similar to the Tssp enzyme. Profiling of the KO mice featured down-regulation of Prss16, as expected, along with the genes mentioned above. Considering that the Prss16-KO mice featured impaired PS, the shared regulation of the four endopeptidase genes suggested their participation in the mechanism of self-peptide generation and PS.
Abstract:
Rear-fanged and aglyphous snakes are usually considered not dangerous to humans because of their limited capacity to inject venom. Therefore, only a few studies have been dedicated to characterizing the venoms of this largest fraction of the snake fauna. Here, we investigated the venom proteome of the rear-fanged snake Thamnodynastes strigatus, in combination with a transcriptomic evaluation of the venom gland. About 60% of all transcripts code for putative venom components. A striking finding is that the most abundant type of transcript (~47%) and also the major protein type in the venom correspond to a new kind of matrix metalloproteinase (MMP) that is unrelated to the classical snake venom metalloproteinases found in all snake families. These enzymes were recently suggested as possible venom components, and we show here that they are proteolytically active and were probably recruited to the venom from an MMP-9 ancestor. Other unusual proteins were also suggested to be venom components: a protein related to lactadherin and an EGF repeat-containing transcript. Despite these unusual molecules, seven toxin classes commonly found in typical venomous snakes are also present in the venom. These results support the evidence that the arsenals of these snakes are very diverse and harbor new types of biologically important molecules.
Abstract:
The combination of solid-phase microextraction (SPME) with comprehensive two-dimensional gas chromatography is evaluated here for fatty acid (FA) profiling of the glycerophospholipid fraction from human buccal mucosal cells. A base-catalyzed derivatization reaction selective for polar lipids such as glycerophospholipids was adopted. SPME is compared to a miniaturized liquid-liquid extraction procedure for the isolation of the FA methyl esters produced in the derivatization step. The limits of detection and limits of quantitation were calculated for each sample preparation method. Because of its lower limits of detection and quantitation, SPME was adopted. The extracted analytes were separated, detected, and quantified by comprehensive two-dimensional gas chromatography with flame ionization detection (FID). The combination of SPME and comprehensive two-dimensional gas chromatography with FID, using a selective derivatization reaction in the preliminary steps, proved to be a simple and fast procedure for FA profiling, and was successfully applied to the analysis of adult human buccal mucosal cells.
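The abstract does not state the convention used to compute the detection and quantitation limits; one common calibration-based convention (an assumption here, not necessarily the one used in the study) is:

```latex
% Common calibration-based definitions (assumed convention, e.g. ICH-style):
\mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S}
% \sigma: standard deviation of the blank response (or of the regression residuals)
% S: slope of the calibration curve
```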
Abstract:
This paper presents a performance analysis of a baseband multiple-input single-output ultra-wideband system over scenarios CM1 and CM3 of the IEEE 802.15.3a channel model, incorporating four different schemes of pre-distortion: time reversal, zero-forcing pre-equaliser, constrained least squares pre-equaliser, and minimum mean square error pre-equaliser. For the third case, a simple solution based on the steepest-descent (gradient) algorithm is adopted and compared with theoretical results. The channel estimations at the transmitter are assumed to be truncated and noisy. Results show that the constrained least squares algorithm has a good trade-off between intersymbol interference reduction and signal-to-noise ratio preservation, providing a performance comparable to the minimum mean square error method but with lower computational complexity.
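For context, the standard generic forms of two of the pre-equalizers named above, and the steepest-descent iteration that can approach a least-squares solution, are sketched below; the notation (channel matrix H, noise variance, step size) is assumed, since the abstract does not define it.

```latex
% Generic zero-forcing and MMSE pre-equalizers (assumed standard forms, not the paper's exact derivation):
\mathbf{W}_{\mathrm{ZF}} = \mathbf{H}^{H}\left(\mathbf{H}\mathbf{H}^{H}\right)^{-1}, \qquad
\mathbf{W}_{\mathrm{MMSE}} = \mathbf{H}^{H}\left(\mathbf{H}\mathbf{H}^{H} + \sigma^{2}\mathbf{I}\right)^{-1}
% Generic steepest-descent update on a quadratic cost J(\mathbf{w}):
\mathbf{w}^{(k+1)} = \mathbf{w}^{(k)} - \mu\,\nabla J\!\left(\mathbf{w}^{(k)}\right)
```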
Abstract:
Landfarm soils are employed in industrial and petrochemical residue bioremediation. This process induces selective pressure directed towards microorganisms capable of degrading toxic compounds. Detailed description of taxa in these environments is difficult due to a lack of knowledge of the culture conditions required for unknown microorganisms. A metagenomic approach permits identification of organisms without the need for culture. However, a DNA extraction step is first required, which can bias taxonomic representativeness and interfere with cloning steps through co-extracted interfering substances. We developed a simplified DNA extraction procedure coupled with metagenomic DNA amplification in an effort to overcome these limitations. The amplified sequences were used to generate a metagenomic data set, and its taxonomic and functional representativeness was evaluated in comparison with a data set built with DNA extracted by conventional methods. The simplified and optimized RAPD-based method for accessing metagenomic information provides better representation of the taxonomic and metabolic aspects of the environmental samples.
Abstract:
Ubiquitous Computing promises seamless access to a wide range of applications and Internet-based services from anywhere, at any time, and using any device. In this scenario, new challenges for the practice of software development arise: applications and services must keep a coherent behavior and a proper appearance, and must adapt to a wide range of contextual usage requirements and hardware aspects. In particular, due to its interactive nature, the interface content of Web applications must adapt to a large diversity of devices and contexts. To overcome such obstacles, this work introduces an innovative methodology for content adaptation of Web 2.0 interfaces. The basis of our work is to combine static adaptation - the implementation of static Web interfaces - with dynamic adaptation - the alteration, at execution time, of static interfaces so as to adapt them to different contexts of use. In this hybrid fashion, our methodology benefits from the advantages of both adaptation strategies, static and dynamic. Along these lines, we designed and implemented UbiCon, a framework on which we tested our concepts through a case study and a development experiment. Our results show that the hybrid methodology over UbiCon leads to broader and more accessible interfaces and to faster and less costly software development. We believe that the UbiCon hybrid methodology can foster more efficient and accurate interface engineering in industry and academia.