962 results for End-user Queries
Abstract:
Mobile phone applications that collect personal data are increasingly present in the daily routine of the average citizen. Associated with these applications are controversies about security risks and invasion of privacy, which can become obstacles to user acceptance of such systems. On the other hand, there is debate about the Privacy Paradox, in which consumers voluntarily disclose more personal information despite stating that they recognize the risks. There is little consensus in academic research on the reasons for this paradox, or even on whether the phenomenon actually exists. The objective of this research is to analyze how the collection of sensitive information influences the choice of mobile applications. The methodology is the study of applications available in mobile app stores using qualitative and quantitative techniques. The results indicate that the most popular products in the store are those that collect the most personal data. However, a closer analysis shows that the most sought-after applications also belong to companies with good reputations and offer more features, which require greater access to the phone's private data. In the survey conducted afterwards, it is observed that consumers reduce their use of applications when they consider that the product collects data excessively, although the strategy adopted to protect this information may vary. Among users of applications that collect data excessively, the primary reason for sharing personal information is found to be the features offered. Furthermore, this research confirms that comparing the data requested by applications with the consumer's initial expectations is a complementary construct for assessing privacy concerns, rather than simply analyzing the amount of information collected. The research process also illustrated that, depending on the analysis method used, it is possible to reach opposite conclusions about whether or not the paradox occurs. This may provide clues about the reasons for the lack of consensus on the subject.
Abstract:
Subsidence related to multiple natural and human-induced processes affects an increasing number of areas worldwide. Although this phenomenon may involve surface deformation with 3D displacement components, negative vertical movement, either progressive or episodic, tends to dominate. Over the last decades, differential SAR interferometry (DInSAR) has become a very useful remote sensing tool for accurately measuring the spatial and temporal evolution of surface displacements over broad areas. This work discusses the main advantages and limitations of addressing active subsidence phenomena by means of DInSAR techniques from an end-user point of view. Special attention is paid to the spatial and temporal resolution, the precision of the measurements, and the usefulness of the data. The presented analysis focuses on the exploitation of DInSAR results for various ground subsidence phenomena (groundwater withdrawal, soil compaction, mining subsidence, evaporite dissolution subsidence, and volcanic deformation) with different displacement patterns in a selection of subsidence areas in Spain. Finally, a comparative cost study is performed for the different techniques applied.
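The measurement at the core of DInSAR is the conversion of differential interferometric phase into line-of-sight displacement. The sketch below is a minimal illustration of that conversion, assuming the phase is already unwrapped and corrected for topographic and orbital contributions, and assuming a C-band wavelength of about 5.6 cm; the sign convention is also an assumption, since it varies between processing chains.

```python
import numpy as np

def phase_to_los_displacement(unwrapped_phase_rad, wavelength_m=0.056):
    """Convert unwrapped differential interferometric phase (radians) into
    line-of-sight displacement (metres). The sign convention (here, positive
    means motion towards the sensor) is an assumption."""
    return -(wavelength_m / (4.0 * np.pi)) * unwrapped_phase_rad

# One full phase cycle (2*pi rad) corresponds to half a wavelength of
# line-of-sight motion, roughly 2.8 cm for a C-band sensor.
print(phase_to_los_displacement(np.array([2.0 * np.pi])))
```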
Abstract:
Rock mass classification systems are widely used tools for assessing the stability of rock slopes. Their calculation requires the prior quantification of several parameters during conventional fieldwork campaigns, such as the orientation of the discontinuity sets, the main properties of the existing discontinuities and the geo-mechanical characterization of the intact rock mass, which can be a time-consuming and often risky task. Conversely, the use of relatively new remote sensing data for modelling the rock mass surface by means of 3D point clouds is changing current investigation strategies in different rock slope engineering applications. In this paper, the main practical issues affecting the application of the Slope Mass Rating (SMR) for the characterization of rock slopes from 3D point clouds are reviewed from an end-user point of view, using three case studies. To this end, the SMR adjustment factors, calculated from different sources of information and processing workflows using different software packages, are compared with those calculated using conventional fieldwork data. In the presented analysis, special attention is paid to the differences between the SMR indexes derived from the 3D point cloud and conventional fieldwork approaches, the main factors that determine the quality of the data and some recognized practical issues. Finally, the reliability of the Slope Mass Rating for the characterization of rock slopes is highlighted.
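For reference, the Slope Mass Rating as commonly published combines the basic Rock Mass Rating with adjustment factors, SMR = RMRb + (F1 × F2 × F3) + F4, where F1 to F3 describe the geometry between a discontinuity set and the slope and F4 accounts for the excavation method. The sketch below illustrates that combination with hypothetical factor values read from published rating tables, not values taken from the case studies discussed here.

```python
def slope_mass_rating(rmr_basic, f1, f2, f3, f4):
    """Combine the basic Rock Mass Rating with the SMR adjustment factors.
    F1-F3 are geometric adjustment ratings for one discontinuity set relative
    to the slope; F4 reflects the excavation method."""
    return rmr_basic + (f1 * f2 * f3) + f4

# Hypothetical values for a single discontinuity set on an excavated slope.
print(slope_mass_rating(rmr_basic=62, f1=0.40, f2=0.85, f3=-25, f4=0))  # 53.5
```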
Abstract:
Online geographic information systems provide the means to extract a subset of desired spatial information from a larger remote repository. Data retrieved representing real-world geographic phenomena are then manipulated to suit the specific needs of an end-user. Often this extraction requires the derivation of representations of objects specific to a particular resolution or scale from a single original stored version. Currently, standard spatial data handling techniques cannot support the multi-resolution representation of such features in a database. In this paper a methodology is presented to store and retrieve versions of spatial objects at different resolutions with respect to scale, using standard database primitives and SQL. The technique involves heavy fragmentation of spatial features, which allows dynamic simplification into scale-specific object representations customised to the display resolution of the end-user's device. Experimental results comparing the new approach to traditional R-Tree indexing and external object simplification reveal that the new approach performs notably better for mobile and WWW applications, where client-side resources are limited and retrieved data loads are kept relatively small.
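A minimal sketch of the idea is given below, using a hypothetical `fragment` table and level numbering rather than the schema from the paper: each feature is stored as many small fragments tagged with the coarsest display level at which they are needed, and a plain SQL query assembles only the fragments appropriate to the client's resolution.

```python
import sqlite3

# Illustrative schema: each spatial feature is stored as many small fragments,
# each tagged with the coarsest display level at which it should appear.
# Table and column names are assumptions, not taken from the paper.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE fragment (
    feature_id INTEGER,
    min_level  INTEGER,   -- coarsest level at which this fragment is needed
    seq        INTEGER,   -- order of the fragment within the feature
    geom_wkt   TEXT)""")
conn.executemany(
    "INSERT INTO fragment VALUES (?, ?, ?, ?)",
    [(1, 0, 0, "LINESTRING(0 0, 10 10)"),     # coarse backbone of feature 1
     (1, 3, 1, "LINESTRING(4 4, 5 6, 6 6)"),  # detail shown when zoomed in
     (1, 5, 2, "LINESTRING(5 5, 5.2 5.4)")])  # finest detail only

def fetch_feature(feature_id, display_level):
    """Return only the fragments needed at the client's display resolution,
    so constrained devices download a simplified representation."""
    rows = conn.execute(
        "SELECT geom_wkt FROM fragment "
        "WHERE feature_id = ? AND min_level <= ? ORDER BY seq",
        (feature_id, display_level))
    return [r[0] for r in rows]

print(fetch_feature(1, display_level=3))  # two of the three fragments
```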
Abstract:
It was hypothesized that employees' perceptions of an organizational culture strong in human relations values and open systems values would be associated with heightened levels of readiness for change which, in turn, would be predictive of change implementation success. Similarly, it was predicted that reshaping capabilities would lead to change implementation success, via its effects on employees' perceptions of readiness for change. Using a temporal research design, these propositions were tested for 67 employees working in a state government department who were about to undergo the implementation of a new end-user computing system in their workplace. Change implementation success was operationalized as user satisfaction and system usage. There was evidence to suggest that employees who perceived strong human relations values in their division at Time 1 reported higher levels of readiness for change at pre-implementation which, in turn, predicted system usage at Time 2. In addition, readiness for change mediated the relationship between reshaping capabilities and system usage. Analyses also revealed that pre-implementation levels of readiness for change exerted a positive main effect on employees' satisfaction with the system's accuracy, user friendliness, and formatting functions at post-implementation. These findings are discussed in terms of their theoretical contribution to the readiness for change literature, and in relation to the practical importance of developing positive change attitudes among employees if change initiatives are to be successful.
Abstract:
A recent focus on intermediary compensation underscores the need to organize the many complex incentives used by channel practitioners. Employing a grounded theory methodology, a channel incentives classification scheme is induced from 170 unique channel incentives used in 59 high technology suppliers’ channel programs. The incentives are organized into 16 subcategories and 5 major categories: Credible Channel Policies, Market Development Support, Supplemental Contact, High-Powered Incentives, and End-User Encouragements. Each incentive subcategory is discussed as a means of controlling reseller behaviors. Also, the conditions that give rise to the implementation of incentives are investigated through four testable research propositions.
Abstract:
Relationships between organizations can be characterized by cooperation, conflict, and change. In this dissertation we study cooperation between organizations by investigating how norms in relationships can enhance innovativeness and subsequently impact relationship performance. We do so by incorporating both the beneficial aspects of long-term relationships and the “dark side” factors that may decrease innovativeness. This provides a balanced assessment of the factors increasing and decreasing the performance of relationships. Next, we study conflict between organizations by taking a network view on conflict, which helps explain why organizations react to conflict. We find stakeholders to have an effect on channel conflict responsiveness. Finally, we study change by means of an organization’s ability to successfully add an Internet channel to its distribution system in order to sell its products or services directly to the end-user. We find that an Internet channel is best implemented by organizations that are flexible, and we identify several circumstances under which this flexibility is highest.
Abstract:
The INTAMAP FP6 project has developed an interoperable framework for real-time automatic mapping of critical environmental variables by extending spatial statistical methods and employing open, web-based data exchange protocols and visualisation tools. This paper gives an overview of the underlying problem and of the project, and discusses which problems it has solved and which open problems seem most relevant to address next. The interpolation problem that INTAMAP solves is the generic problem of spatial interpolation of environmental variables without user interaction, based on measurements of, e.g., PM10, rainfall or gamma dose rate, at arbitrary locations or over a regular grid covering the area of interest. It deals with problems of varying spatial resolution of measurements, the interpolation of averages over larger areas, and with providing information on the interpolation error to the end-user. In addition, monitoring network optimisation is addressed in a non-automatic context.
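As a rough illustration of automatic interpolation without user interaction, the sketch below uses inverse distance weighting, a deliberately simpler stand-in for the geostatistical predictors (such as kriging) that INTAMAP actually extends; the station coordinates and PM10 values are hypothetical.

```python
import numpy as np

def idw_interpolate(obs_xy, obs_values, grid_xy, power=2.0):
    """Inverse-distance-weighted interpolation of point observations onto
    arbitrary prediction locations. obs_xy: (n, 2), grid_xy: (m, 2)."""
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)          # avoid division by zero at station sites
    w = 1.0 / d ** power
    return (w @ obs_values) / w.sum(axis=1)

# Hypothetical PM10 measurements at three monitoring stations.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pm10 = np.array([22.0, 35.0, 18.0])
grid = np.array([[5.0, 5.0], [1.0, 1.0]])
print(idw_interpolate(stations, pm10, grid))
```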
Abstract:
Wireless sensor networks have been identified as one of the key technologies for the 21st century. They consist of tiny devices with limited processing and power capabilities, called motes, that can be deployed in large numbers to provide useful sensing capabilities. Even though they are flexible and easy to deploy, a number of considerations regarding their fault tolerance, energy conservation and re-programmability need to be addressed before drawing any substantial conclusions about the effectiveness of this technology. In order to overcome these limitations, we propose a middleware solution. The proposed scheme is based on two main methods. The first method involves the creation of a flexible communication protocol based on technologies such as Mobile Code/Agents and Linda-like tuple spaces. In this way, every node of the wireless sensor network produces and processes data based on what is best both for itself and for the group to which it belongs. The second method incorporates this protocol into a middleware that aims to bridge the gap between the application layer and low-level constructs such as the physical layer of the wireless sensor network. A fault-tolerant platform for deploying and monitoring applications in real time offers a number of possibilities to the end user, while also giving them the freedom to experiment with various parameters so that the deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through a number of trials aiming to test its merits under real-time conditions and to assess its effectiveness against other similar approaches. Finally, parameters which determine the characteristics of the proposed scheme are also examined.
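Below is a minimal, single-process sketch of a Linda-like tuple space, the coordination model mentioned above. The operation names (`out`, `rd`, `inp`) and the wildcard matching rule are common in the Linda literature but are assumptions here, since the abstract does not fix them; a real mote middleware would implement the space over the radio rather than in local memory.

```python
class TupleSpace:
    """Toy Linda-like tuple space: producers publish tuples, consumers match
    them with templates in which None acts as a wildcard field."""

    def __init__(self):
        self._space = []

    def out(self, tup):
        """Publish a tuple into the shared space."""
        self._space.append(tup)

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):
        """Read (without removing) the first tuple matching the template."""
        return next((t for t in self._space if self._match(template, t)), None)

    def inp(self, template):
        """Take (read and remove) the first matching tuple, if any."""
        t = self.rd(template)
        if t is not None:
            self._space.remove(t)
        return t

ts = TupleSpace()
ts.out(("temperature", 7, 21.5))           # mote 7 reports 21.5 degrees
print(ts.rd(("temperature", None, None)))  # any temperature reading
print(ts.inp(("temperature", 7, None)))    # consume mote 7's reading
```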
Abstract:
As mobile devices become increasingly diverse and continue to shrink in size and weight, their portability is enhanced but, unfortunately, their usability tends to suffer. Ultimately, the usability of mobile technologies determines their future success in terms of end-user acceptance and, thereafter, adoption and social impact. Widespread acceptance will not, however, be achieved if users’ interaction with mobile technology amounts to a negative experience. Mobile user interfaces need to be designed to meet the functional and sensory needs of users. Social and Organizational Impacts of Emerging Mobile Devices: Evaluating Use focuses on human-computer interaction related to the innovation and research in the design, evaluation, and use of innovative handheld, mobile, and wearable technologies in order to broaden the overall body of knowledge regarding such issues. It aims to provide an international forum for researchers, educators, and practitioners to advance knowledge and practice in all facets of design and evaluation of human interaction with mobile technologies.
Abstract:
Wireless sensor networks have been identified as one of the key technologies for the 21st century. In order to overcome their limitations, such as fault tolerance and energy conservation, we propose a middleware solution, In-Motes. In-Motes is a fault-tolerant platform for deploying and monitoring applications in real time that offers a number of possibilities to the end user, while also giving them the freedom to experiment with various parameters so that the deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through In-Motes EYE, an agent-based real-time In-Motes application developed for sensing acceleration variations in an environment, with the aim of testing its merits under real-time conditions. The application was tested in a road-like prototype area for a period of four months.
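The abstract does not describe the detection logic inside In-Motes EYE; purely as an illustration of what "sensing acceleration variations" can mean in code, the sketch below flags windows in which a sliding standard deviation of accelerometer samples exceeds a threshold. The window size and threshold are assumptions.

```python
from collections import deque
from statistics import pstdev

def variation_alerts(samples, window=8, threshold=0.5):
    """Yield (index, spread) whenever the standard deviation of the last
    `window` acceleration samples exceeds `threshold`. Illustrative only;
    not the In-Motes EYE algorithm."""
    buf = deque(maxlen=window)
    for i, a in enumerate(samples):
        buf.append(a)
        if len(buf) == window and pstdev(buf) > threshold:
            yield i, pstdev(buf)

# A quiet signal followed by a burst of vibration, then quiet again.
readings = [0.0] * 10 + [0.0, 1.8, -1.5, 2.1, -2.0, 0.3] + [0.0] * 5
for idx, spread in variation_alerts(readings):
    print(f"variation detected at sample {idx} (stdev {spread:.2f})")
```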
Abstract:
The use of spreadsheets has become routine in all aspects of business with usage growing across a range of functional areas and a continuing trend towards end user spreadsheet development. However, several studies have raised concerns about the accuracy of spreadsheet models in general, and of end user developed applications in particular, raising the risk element for users. High error rates have been discovered, even though the users/developers were confident that their spreadsheets were correct. The lack of an easy to use, context-sensitive validation methodology has been highlighted as a significant contributor to the problems of accuracy. This paper describes experiences in using a practical, contingency factor-based methodology for validation of spreadsheet-based DSS. Because the end user is often both the system developer and a stakeholder, the contingency factor-based validation methodology may need to be used in more than one way. The methodology can also be extended to encompass other DSS.
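The contingency factor-based methodology itself is a validation process rather than code; as an illustration of the kind of mechanical check that can support it, the sketch below flags formulas containing hard-coded numeric constants, a well-known source of end-user spreadsheet errors. The cell references and formulas are hypothetical.

```python
import re

# Hypothetical formula cells extracted from a spreadsheet-based DSS.
formulas = {
    "C2": "=A2*B2",
    "C3": "=A3*1.175",       # embedded VAT rate: a hidden assumption
    "D4": "=SUM(C2:C3)*0.9"  # embedded discount factor
}

def hard_coded_constants(cells):
    """Yield (cell, formula, constant) for numeric literals embedded in
    formulas, ignoring the digits that belong to cell references."""
    for ref, formula in cells.items():
        for number in re.findall(r"(?<![A-Z])\d+\.?\d*", formula):
            yield ref, formula, number

for ref, formula, number in hard_coded_constants(formulas):
    print(f"{ref}: constant {number} embedded in {formula}")
```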
Abstract:
A variety of content-based image retrieval systems exist which enable users to perform image retrieval based on colour content - i.e., colour-based image retrieval. For the production of media for use in television and film, colour-based image retrieval is useful for retrieving specifically coloured animations, graphics or videos from large databases (by comparing user queries to the colour content of extracted key frames). It is also useful to graphic artists creating realistic computer-generated imagery (CGI). Unfortunately, current methods for evaluating colour-based image retrieval systems have two major drawbacks. Firstly, the relevance of images retrieved during the task cannot be measured reliably. Secondly, existing methods do not account for the creative design activity known as reflection-in-action. Consequently, the development and application of novel and potentially more effective colour-based image retrieval approaches, better supporting the large number of users creating media for use in television and film productions, is not possible, as their efficacy cannot be reliably measured and compared to existing technologies. As a solution to the problem, this paper introduces the Mosaic Test. The Mosaic Test is a user-based evaluation approach in which participants complete an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. In this paper, we introduce the Mosaic Test and report on a user evaluation. The findings of the study reveal that the Mosaic Test overcomes the two major drawbacks associated with existing evaluation methods and does not require expert participants.
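Colour-based retrieval of this kind typically compares a colour description of the query against those of extracted key frames. The sketch below uses RGB histograms and histogram intersection as one common similarity choice; it is an assumption for illustration, not the specific systems evaluated with the Mosaic Test.

```python
import numpy as np

def colour_histogram(image_rgb, bins=8):
    """3-D RGB histogram, normalised so images of different sizes compare
    fairly. image_rgb: array of shape (h, w, 3) with values in 0..255."""
    hist, _ = np.histogramdd(image_rgb.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    return hist / hist.sum()

def similarity(query_hist, candidate_hist):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return np.minimum(query_hist, candidate_hist).sum()

# Two tiny synthetic "images": one mostly red, one mostly blue.
red = np.zeros((4, 4, 3)); red[..., 0] = 200
blue = np.zeros((4, 4, 3)); blue[..., 2] = 200
print(similarity(colour_histogram(red), colour_histogram(red)))   # 1.0
print(similarity(colour_histogram(red), colour_histogram(blue)))  # 0.0
```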
Abstract:
In this article we envision factors and trends that shape the next generation of environmental monitoring systems. One key factor in this respect is the combined effect of end-user needs and the general development of IT services and their availability. Currently, an environmental (monitoring) system is assumed to be reactive: it delivers measurement data and computational results only if the user explicitly asks for them, either by query or by subscription. There is a temptation to automate this by simply pushing data to end-users. This, however, easily leads to an "advertisement strategy", where data is pushed to end-users regardless of their needs. Under this strategy, the sheer amount of received data obfuscates the individual messages; any "automatic" service, regardless of its fitness, overruns a system that requires the user's initiative. The foreseeable problem is that, without overall management, each new environmental service will compete for end-users' attention and thus inadvertently hinder the use of existing services. As the main contribution, we investigate the nature of proactive environmental systems and how they should be designed to avoid the aforementioned problem. We also discuss how semantics, participatory sensing, uncertainty management, and situational awareness link to proactive environmental systems. We illustrate our proposals with some real-life examples.
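A minimal sketch of the proactive behaviour argued for above: measurements are pushed only to end-users whose declared interests and thresholds they match, rather than broadcast to everyone. The user profiles and the relevance rule are illustrative assumptions, not a design taken from the article.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    variable: str          # which quantity the user cares about
    alert_threshold: float # only push when the value exceeds this

def proactive_dispatch(measurement, users):
    """Push a new measurement only to users for whom it is relevant,
    avoiding the 'advertisement strategy' of pushing everything to everyone."""
    variable, value = measurement
    for user in users:
        if user.variable == variable and value > user.alert_threshold:
            yield user.name, f"{variable} reached {value}"

users = [UserProfile("health_agency", "pm10", 50.0),
         UserProfile("radiation_office", "gamma_dose_rate", 0.3)]
for recipient, message in proactive_dispatch(("pm10", 61.2), users):
    print(recipient, "<-", message)
```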
Abstract:
Over recent years, hub-and-spoke distribution techniques have attracted widespread research attention. Despite a growing body of literature in this area, there is less focus on the spoke-terminal element of the hub-and-spoke system as a key component in the overall service received by the end-user. Current literature is highly geared towards discussing bulk optimization of freight units rather than the more discrete and individualistic profile characteristics of shared-user Less-than-truckload (LTL) freight. In this paper, a literature review is presented to examine the role hub-and-spoke systems play in meeting multi-profile customer demands, particularly in developing sectors with more sophisticated needs, such as retail. The paper also looks at the use of simulation technology as a suitable tool for analyzing spoke-terminal operations within developing hub-and-spoke systems.