931 results for information value
Abstract:
Information management in engineering organisations faces a particular challenge in the ever-increasing volume of information. It has been recognised that an effective methodology is required to evaluate information in order to avoid information overload and to retain the right information for reuse. Taking as a starting point a number of current tools and techniques that attempt to obtain ‘the value’ of information, it is proposed that an assessment or filter mechanism for information needs to be developed. This paper addresses this issue first by briefly reviewing the information overload problem, the definition of value, and related research on the value of information in various areas. A “characteristic”-based framework for information evaluation is then introduced, using the key characteristics identified in related work as an example. A Bayesian network method is incorporated into the framework to link the characteristics to information value, so that the quality and value of information can be calculated quantitatively. The training and verification process for the model is then described using a sample of 60 real engineering documents. The model gives reasonably accurate results; the differences between the model's calculations and the training judgements are summarised and their potential causes discussed. Finally, several further issues are raised, including challenges to the framework and implementations of this evaluation method.
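The abstract above links document characteristics to information value through a Bayesian network. As a rough, self-contained illustration of that idea (not the paper's actual model), the sketch below scores a document from a few binary characteristics using a noisy-OR combination, a common simplification of a Bayesian network's conditional probability table. The characteristic names, weights and leak term are invented for the example.

```python
# Minimal sketch: score a document's "information value" from binary
# characteristics via a noisy-OR combination. The characteristics and
# weights below are illustrative assumptions, not the paper's.

def value_probability(characteristics, weights, leak=0.05):
    """Noisy-OR: each present characteristic independently 'causes'
    the document to be valuable with probability equal to its weight."""
    p_not_valuable = 1.0 - leak
    for name, present in characteristics.items():
        if present:
            p_not_valuable *= (1.0 - weights[name])
    return 1.0 - p_not_valuable

weights = {"relevance": 0.6, "currency": 0.3, "completeness": 0.4}
doc = {"relevance": True, "currency": False, "completeness": True}
score = value_probability(doc, weights)
print(round(score, 3))  # a value in (0, 1); higher means keep for reuse
```

A filter mechanism like the one the paper proposes could then retain only documents whose score exceeds a threshold tuned on training judgements.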
Abstract:
VALOSADE (Value Added Logistics in Supply and Demand Chains) is a research project of Anita Lukka's VALORE (Value Added Logistics Research) research team at Lappeenranta University of Technology. VALOSADE is included in the ELO (Ebusiness logistics) technology programme of Tekes (the Finnish Technology Agency). SMILE (SME-sector, Internet applications and Logistical Efficiency) is one of the four subprojects of VALOSADE. SMILE research focuses on a case network composed of small and medium-sized mechanical maintenance service providers and global wood processing customers. The basic principle of the SMILE study is communication and ebusiness in the supply and demand network. This first phase of the research concentrates on creating the background for the SMILE study and for ebusiness solutions of the maintenance case network. The focus is on general trends of ebusiness in supply chains and networks of different industries; the total ebusiness system architecture of company networks; the ebusiness strategy of the company network; the information value chain; the different factors that influence the ebusiness solution of a company network; and the correlation between ebusiness and competitive advantage. Literature, interviews and benchmarking were used as research methods in this qualitative case study. Networks and end-to-end supply chains are organizational structures that can add value for the end customer. Information is one of the key factors in these decentralized structures. Because of the decentralization of business, information is produced and used in different companies and in different information systems. Information refinement services are needed to manage information flows in company networks between different systems. Furthermore, some new solutions, such as network information systems, are utilised in optimising network performance and in standardizing common network processes.
Some cases have, however, indicated that utilization of ebusiness in a decentralized business model is not always a necessity; the value-add of ICT must be defined case-specifically. In the theory part of the report, different ebusiness and architecture models are introduced. These models are compared to empirical case data in the research results. The biggest difference between theory and empirical data is that the models are mainly developed for large-scale companies, not for SMEs. This is because implemented network ebusiness solutions are mainly large-company centred. Genuine SME-network-centred ebusiness models are quite rare, and studies in that area are few in number. Business relationships between customers and their SME suppliers nowadays concentrate more on collaborative tactical and strategic initiatives in addition to transaction-based operational initiatives. However, ebusiness systems are still mainly based on the exchange of operational transactional data. Collaborative ebusiness solutions are in the planning or pilot phase in most case companies. Furthermore, many ebusiness solutions today involve only two participants, whereas network and end-to-end supply chain transparency and information systems are quite rare. Transaction volumes, data formats, the types of exchanged information, information criticality, the type and duration of the business relationship, partners' internal information systems, and processes and operation models (e.g. different ordering models) differ among network companies; furthermore, companies are at different stages of networking and ebusiness readiness. Because of these factors, different customer-supplier combinations in the network must utilise totally different ebusiness architectures, technologies, systems and standards.
Abstract:
With increasing technological innovation, the concept of marketing and its applications become more functional and wide-ranging. Today, we witness steady growth in the development of mobile marketing campaigns, i.e., marketing campaigns targeting mobile devices (mobile phones, smartphones, PDAs, tablets). Among the several mobile technologies available (Bluetooth networks, Wi-Fi, WAP, the SMS service, MMS), Bluetooth seems to have the biggest potential for the least invasive consumer mobile marketing strategy. This study seeks to answer the question "what factors may motivate the Portuguese consumer to accept Bluetooth marketing?". We propose a conceptual model capable of investigating the relationships between the several responsiveness factors to Bluetooth marketing. A set of hypotheses, tested through an online questionnaire administered to a valid sample of 755 participants, demonstrates that there is a relationship between factors such as expanded knowledge of the technology and Bluetooth marketing receptivity. Additionally, we find that the information value of mobile advertising messages, such as entertainment value and personalization, relates well to responsiveness. The ability to accept or dismiss promotional messages sent to mobile phones and other safety features also correlated well with Bluetooth marketing receptivity.
Abstract:
With increasing technological innovation, the concept of marketing and its applications have become more functional and broad. Today we see the development of mobile marketing campaigns, i.e., marketing campaigns for mobile devices (mobile phones, smartphones, PDAs, tablets). Taking advantage of mobile device services (Bluetooth networks, Wi-Fi, WAP, the SMS service, MMS) as vehicles to approach and communicate with consumers, Bluetooth technology offers mobile marketing a way to become increasingly less invasive to consumers. This study seeks to answer the question "what factors may motivate the Portuguese consumer to adopt Bluetooth marketing?". Based on the literature on mobile marketing, Bluetooth marketing and consumer behaviour theories, we propose a conceptual model capable of investigating the relationships between the determinants of responsiveness to Bluetooth marketing. The empirical study, developed from a set of hypotheses and an online questionnaire administered to a sample of 755 respondents, demonstrated that factors such as the technology's ease of use, file exchanging and peer influence are related to receptivity to Bluetooth marketing. The information value of mobile advertising messages, such as entertainment and personalization, also relates to responsiveness. The consumer's perceived control over mobile promotional messages and the safety features of the technology also showed a positive relationship with receptivity to Bluetooth marketing.
Abstract:
In this thesis, we investigate the capacity of each cerebral hemisphere to use the visual information available during word recognition. It is generally accepted that the left hemisphere (LH) is better equipped for reading than the right hemisphere (RH). Indeed, the visuoperceptual mechanisms used in word recognition are located mainly in the LH (Cohen, Martinaud, Lemer et al., 2003). Since normal readers make optimal use of middle spatial frequencies (about 2.5-3 cycles per degree of visual angle) to recognize letters, the LH may process them better than the RH (Fiset, Gosselin, Blais, & Arguin, 2006). Moreover, studies of hemispheric lateralization usually use a paradigm of presentation in the visual periphery. It has been proposed that the effect of visual eccentricity on word recognition is unequal between the hemifields. Notably, the first letter is usually the one that carries the most information for identifying a word. It is also the most eccentric letter when the word is presented to the left visual field (LVF), which may impair its identification independently of the RH's reading abilities. The aim of the first study is to determine the spatial frequency spectrum used by the LH and the RH in word recognition. That of the second study is to explore the biases created by eccentricity and by the informative value of letters during divided-field presentation. First, we find that the spatial frequency spectrum used by the two hemispheres in word recognition is globally similar, even though the LH requires less visual information than the RH to reach the same level of performance. Surprisingly, however, the RH uses higher spatial frequencies to identify longer words.
Second, with presentation to the LVF, we find that the first letter, that is, the most eccentric one, is among the best identified, even when it has a greater informative value. This contradicts the hypothesis that letter eccentricity exerts a negative bias on words presented to the LVF. Interestingly, our results suggest the presence of a processing strategy specific to the lexicon.
Abstract:
The aim of this study is to verify whether there is congruence between the idea of loyalty expressed in banks' discourse and its concept in relationship marketing, identifying the meaning of this construct in the messages of banking institutions, and also to identify the factors that lead customers to maintain lasting relationships with banks. The study is exploratory and was conducted through individual interviews with retail customers and managers of public and private banks in Brasília/DF, in order to explore the customer-bank relationship and better orient the analysis of the data obtained. Data were collected from 11 interviewees of both sexes, residents of Brasília/DF, during March and April 2011. To achieve the proposed objectives, a qualitative research method was adopted, focusing on the informational value of the message itself and of the words, arguments and ideas expressed in it, using an interpretative approach to data analysis. The results showed a conceptual gap between the banks' idea of loyalty, defined as the understanding that a customer who trusts his bank is satisfied and does not leave it, and its concept in relationship marketing, which defines loyalty as a deep commitment by the customer to consistently repurchase a product or service in the future: bank customers, regardless of factors such as the time or effort involved in switching providers, are sensitive to substantial fee increases and have no commitment to repurchase from a bank, nor to buy from a single bank. Consequently, they cannot be considered loyal.
The main factors responsible for lasting relationships with banks were found to be the quality of the service provided by the bank and the reciprocity in the relationship, both of which build feelings of satisfaction and trust in bank customers. The study concludes with recommendations intended to benefit and develop managers in this segment.
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied for target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexities adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these introduced Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. Then, this approach is extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed that shows the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased with the variance of the resultant approximation error decreasing linearly as the number of samples increases. Computational complexities for optimizing the novel information theoretic functions under sensor dynamics constraints are studied, and are proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with data of ocean currents obtained by moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information theoretic functions are superior at learning the target kinematics with little or no prior knowledge.
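As a rough, self-contained illustration of the expected-KL idea described above (not the dissertation's implementation), the sketch below estimates the expected KL divergence between the posterior and prior of a one-dimensional Gaussian process at a query point, averaging over simulated future measurements at a single candidate sensing location. The RBF kernel, noise level and locations are all assumptions made for the example.

```python
import math
import random

# Illustrative sketch: expected KL divergence between prior and posterior
# of a 1-D zero-mean Gaussian process at a query point, estimated by Monte
# Carlo over future measurements at one candidate sensing location.

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel (an assumed choice)."""
    return math.exp(-0.5 * (a - b) ** 2 / ell ** 2)

def posterior_at(x_new, y_new, x_query, noise=0.1):
    """GP posterior mean/variance at x_query after one measurement at x_new."""
    k_nn = rbf(x_new, x_new) + noise ** 2
    k_nq = rbf(x_new, x_query)
    mean = k_nq * y_new / k_nn
    var = rbf(x_query, x_query) - k_nq ** 2 / k_nn
    return mean, max(var, 1e-9)

def kl_gauss(m0, v0, m1, v1):
    """KL(N(m0,v0) || N(m1,v1)) for univariate Gaussians."""
    return 0.5 * (math.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

random.seed(0)
x_query, x_new = 0.5, 0.4
prior_mean, prior_var = 0.0, rbf(x_query, x_query)

# Monte Carlo expectation over measurements drawn from the prior predictive
kls = []
for _ in range(2000):
    y_new = random.gauss(0.0, math.sqrt(rbf(x_new, x_new) + 0.1 ** 2))
    m, v = posterior_at(x_new, y_new, x_query)
    kls.append(kl_gauss(m, v, prior_mean, prior_var))

expected_kl = sum(kls) / len(kls)
print(f"expected KL at candidate location: {expected_kl:.3f}")
```

A sensor planner in this spirit would evaluate such an expected-KL score for each feasible control input and steer the sensor toward the location with the highest expected information gain.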
Abstract:
Companies operating in the wood processing industry need to increase their productivity by implementing automation technologies in their production systems. Increasing global competition and rising raw material prices challenge their competitiveness. Yet too extensive automation brings risks such as a deterioration in situation awareness and operator deskilling. The concept of Levels of Automation is generally seen as a means to achieve a balanced task allocation between the operators' skills and competences and the need for automation technology relieving humans from repetitive or hazardous work activities. The aim of this thesis was to examine to what extent existing methods for assessing Levels of Automation in production processes are applicable in the wood processing industry when focusing on improved competitiveness of production systems. This was done by answering the following research questions (RQ): RQ1: Which method is most appropriate for measuring Levels of Automation in the wood processing industry? RQ2: How can the measurement of Levels of Automation contribute to improved competitiveness of the wood processing industry's production processes? To answer RQ1, literature reviews were used to identify the main characteristics of the wood processing industry affecting its automation potential, and appropriate assessment methods for Levels of Automation. When selecting the most suitable method, factors such as relevance to the target industry, application complexity and the operational level the method addresses were important. The DYNAMO++ method, which covers both a rather quantitative technical-physical dimension and a more qualitative social-cognitive dimension, was seen as most appropriate when these factors were taken into account.
To answer RQ2, a case study was undertaken at a major Swedish manufacturer of interior wood products to point out how the measurement of Levels of Automation can contribute to improved competitiveness of the wood processing industry. The focus was on the task level on the shop floor, and concrete improvement suggestions were elaborated after applying the measurement method for Levels of Automation. The main aspects considered for generalization were enhancements regarding ergonomics in process design and cognitive support tools for shop-floor personnel through task standardization. Furthermore, difficulties regarding the automation of grading and sorting processes, due to the heterogeneous material properties of wood, argue for a suitable arrangement of human intervention options in terms of work task allocation. The application of a modified version of DYNAMO++ revealed its pros and cons during the case study, including high operator involvement in the improvement process and the distinct predisposition of DYNAMO++ to be applied in an assembly system.
Abstract:
The nutritional composition determined in the laboratory and that declared on the labels of manufactured foods can differ significantly. The purpose of this study was to determine the nutritional composition of hamburgers and meatballs and compare it with their labels. The food analysis was performed following the analytical standards of the Adolfo Lutz Institute, and energy content was determined by bomb calorimetry. Regarding energy value, all the samples had values lower than those stated on the label. The lipid content of the hamburgers and meatballs (except the beef ones) was lower than reported on the label. The protein values for the meatballs and the chicken hamburger were lower than those on the labels. Thus, labels may either overestimate or underestimate some nutritional values, giving the population erroneous information.
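The comparison the study performs can be sketched as a simple deviation check between laboratory and label values. The 20% tolerance threshold and the sample numbers below are illustrative assumptions, not the study's data.

```python
# Sketch: flag label values deviating from laboratory results beyond a
# tolerance. Threshold and figures are hypothetical, for illustration only.

def check_label(lab, label, tolerance=0.20):
    """Return (relative deviation of label vs. lab, whether it exceeds tolerance)."""
    deviation = (label - lab) / lab
    return deviation, abs(deviation) > tolerance

samples = {
    "energy (kcal/100 g)": (250.0, 290.0),  # (laboratory, label) - hypothetical
    "lipids (g/100 g)":    (12.0, 10.5),
    "protein (g/100 g)":   (14.0, 17.5),
}
for nutrient, (lab, label) in samples.items():
    dev, flagged = check_label(lab, label)
    print(f"{nutrient}: {dev:+.1%}" + (" OUT OF TOLERANCE" if flagged else ""))
```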
Abstract:
Information systems are widespread and used by anyone with computing devices, as well as by corporations and governments. It is often the case that security leaks are introduced during the development of an application. The reasons for these security bugs are multiple, but among them one can easily identify that it is very hard to define and enforce relevant security policies in modern software. This is because modern applications often rely on container sharing and multi-tenancy where, for instance, data can be stored in the same physical space but is logically mapped into different security compartments or data structures. In turn, these security compartments, into which data is classified by security policies, can also be dynamic and depend on runtime data. In this thesis we introduce and develop the novel notion of dependent information flow types, and focus on the problem of ensuring data confidentiality in data-centric software. Dependent information flow types fit within the standard framework of dependent type theory but, unlike usual dependent types, crucially allow the security level of a type, rather than just the structural data type itself, to depend on runtime values. Our dependent function and dependent sum information flow types provide a direct, natural and elegant way to express and enforce fine-grained security policies on programs, namely programs that manipulate structured data types in which the security level of a structure field may depend on values dynamically stored in other fields. The main contribution of this work is an efficient analysis that allows programmers to verify, during the development phase, whether programs have information leaks; that is, it verifies whether programs protect the confidentiality of the information they manipulate. As such, we also implemented a prototype typechecker that can be found at http://ctp.di.fct.unl.pt/DIFTprototype/.
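The thesis develops a static type system; as a rough dynamic analogue of the core idea (a field's security level depending on a runtime value in another field), the sketch below tags a record's body with a level computed from its owner and checks reads against a clearance. All names, levels and the ownership rule are illustrative assumptions, not the thesis's calculus.

```python
# Dynamic-check analogue of dependent information flow labels: the body's
# security level depends on the runtime value of the 'owner' field.
# Levels and the labeling rule are invented for illustration.

LEVELS = {"public": 0, "confidential": 1, "secret": 2}

class LabeledRecord:
    def __init__(self, owner, body):
        self.owner = owner
        # Label depends on a runtime value: admin-owned data is secret.
        self.body_level = "secret" if owner == "admin" else "confidential"
        self._body = body

    def read_body(self, clearance):
        """Allow the read only if the clearance dominates the body's level."""
        if LEVELS[clearance] < LEVELS[self.body_level]:
            raise PermissionError(
                f"clearance {clearance!r} cannot read {self.body_level!r} data")
        return self._body

try:
    LabeledRecord("admin", "payroll").read_body("confidential")
except PermissionError as e:
    print("blocked:", e)
print(LabeledRecord("alice", "memo").read_body("confidential"))
```

The point of the thesis's *static* analysis is that such violations are rejected by the typechecker at development time, with no runtime checks needed.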
Abstract:
This paper investigates the role of learning by private agents and the central bank (two-sided learning) in a New Keynesian framework in which both sides of the economy have asymmetric and imperfect knowledge about the true data generating process. We assume that all agents employ the data that they observe (which may be distinct for different sets of agents) to form beliefs about unknown aspects of the true model of the economy, use their beliefs to decide on actions, and revise these beliefs through a statistical learning algorithm as new information becomes available. We study the short-run dynamics of our model and derive its policy recommendations, particularly with respect to central bank communications. We demonstrate that two-sided learning can generate substantial increases in volatility and persistence, and alter the behavior of the variables in the model in a significant way. Our simulations do not converge to a symmetric rational expectations equilibrium and we highlight one source that invalidates the convergence results of Marcet and Sargent (1989). Finally, we identify a novel aspect of central bank communication in models of learning: communication can be harmful if the central bank's model is substantially mis-specified.
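The belief-revision loop described above can be illustrated with a toy example (not the paper's model): two sides of the economy each observe their own noisy signal of the same outcome and update their beliefs by a constant-gain statistical learning rule. The learned quantity, gain and noise levels are assumptions made for the sketch.

```python
import random

# Toy two-sided learning: the private sector and the central bank each
# recursively estimate the mean of an observed series with constant-gain
# learning, but see different noisy versions of it (asymmetric information).

random.seed(1)
true_mean = 2.0
gain = 0.05  # constant gain: recent data weighted more, beliefs keep adapting

belief_private, belief_bank = 0.0, 0.0
for t in range(500):
    common_shock = random.gauss(0.0, 1.0)
    # Each side observes its own noisy signal of the same underlying outcome
    obs_private = true_mean + common_shock + random.gauss(0.0, 0.5)
    obs_bank = true_mean + common_shock + random.gauss(0.0, 0.5)
    # Constant-gain recursive update: belief += gain * forecast error
    belief_private += gain * (obs_private - belief_private)
    belief_bank += gain * (obs_bank - belief_bank)

print(round(belief_private, 2), round(belief_bank, 2))
```

Even in this toy, the two beliefs hover near the truth without coinciding, echoing the paper's point that two-sided learning need not converge to a symmetric rational expectations equilibrium.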
Abstract:
There are many situations in which individuals have a choice of whether or not to observe eventual outcomes. In these instances, individuals often prefer to remain ignorant. These contexts are outside the scope of analysis of the standard von Neumann-Morgenstern (vNM) expected utility model, which does not distinguish between lotteries for which the agent sees the final outcome and those for which he does not. I develop a simple model that admits preferences for making an observation or for remaining in doubt. I then use this model to analyze the connection between preferences of this nature and risk-attitude. This framework accommodates a wide array of behavioral patterns that violate the vNM model, and that may not seem related, prima facie. For instance, it admits self-handicapping, in which an agent chooses to impair his own performance. It also accommodates a status quo bias without having recourse to framing effects, or to an explicit definition of reference points. In a political economy context, voters have strict incentives to shield themselves from information. In settings with other-regarding preferences, this model predicts observed behavior that seems inconsistent with either altruism or self-interested behavior.