861 results for information value
Abstract:
Mode of access: Internet.
Abstract:
The management of information in engineering organisations faces a particular challenge in the ever-increasing volume of information. It has been recognised that an effective methodology is required to evaluate information in order to avoid information overload and to retain the right information for reuse. Taking as a starting point a number of current tools and techniques which attempt to obtain ‘the value’ of information, it is proposed that an assessment or filter mechanism for information needs to be developed. This paper addresses this issue first by briefly reviewing the information overload problem, the definition of value, and related research on the value of information in various areas. A “characteristic”-based framework for information evaluation is then introduced, using the key characteristics identified from related work as an example. A Bayesian Network diagram method is incorporated into the framework to build the linkage between the characteristics and information value, so that the quality and value of information can be calculated quantitatively. The training and verification process for the model is then described using 60 real engineering documents as a sample. The model gives reasonably accurate results, and the differences between the model's calculations and the training judgements are summarised and their potential causes discussed. Finally, several further issues, including challenges for the framework and the implementation of this evaluation method, are raised.
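As a rough illustration of how such a characteristic-to-value linkage could be computed, the sketch below builds a tiny two-layer Bayesian network in plain Python; the characteristic names, priors and conditional probability table are invented for demonstration and are not the paper's trained model.

```python
# A minimal sketch (invented characteristics and probabilities, not the
# paper's trained model): a two-layer Bayesian network in plain Python
# linking document characteristics to a binary "high value" node.
from itertools import product

# Assumed prior probability that each characteristic is present.
priors = {"up_to_date": 0.6, "reused_before": 0.3, "authoritative": 0.5}

def p_high_value(state):
    """Illustrative CPT: value rises with the number of positive characteristics."""
    return min(1.0, 0.1 + 0.25 * sum(state.values()))

def prob_high_value(evidence=None):
    """P(high value | evidence) by enumeration over the characteristic nodes."""
    evidence, names = evidence or {}, list(priors)
    numerator = normaliser = 0.0
    for bits in product([0, 1], repeat=len(names)):
        state = dict(zip(names, bits))
        if any(state[k] != v for k, v in evidence.items()):
            continue
        weight = 1.0
        for name, bit in state.items():
            weight *= priors[name] if bit else 1 - priors[name]
        numerator += weight * p_high_value(state)
        normaliser += weight
    return numerator / normaliser

print(prob_high_value())                       # unconditional value estimate
print(prob_high_value({"reused_before": 1}))   # after observing one characteristic
```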
Abstract:
Big data analysis in the healthcare sector is still in its early stages compared with other business sectors, for numerous reasons, among them accommodating the volume, velocity and variety of healthcare data, and identifying platforms that can examine data from multiple sources, such as clinical records, genomic data, financial systems, and administrative systems. The Electronic Health Record (EHR) is a key information resource for big data analysis and is also composed of varied co-created values. Successful integration and crossing of different subfields of healthcare data, such as biomedical informatics and health informatics, could lead to huge improvements for the end users of the health care system, i.e. the patients.
Abstract:
In some supply chains, materials are ordered periodically according to local information. This paper investigates how to improve the performance of such a supply chain. Specifically, we consider a serial inventory system in which each stage implements a local reorder interval policy; i.e., each stage orders up to a local basestock level according to a fixed-interval schedule. A fixed cost is incurred for placing an order. Two improvement strategies are considered: (1) expanding the information flow by acquiring real-time demand information and (2) accelerating the material flow via flexible deliveries. The first strategy leads to a reorder interval policy with full information; the second strategy leads to a reorder point policy with local information. Both policies have been studied in the literature. Thus, to assess the benefit of these strategies, we analyze the local reorder interval policy. We develop a bottom-up recursion to evaluate the system cost and provide a method to obtain the optimal policy. A numerical study shows the following: Increasing the flexibility of deliveries lowers costs more than does expanding information flow; the fixed order costs and the system lead times are key drivers that determine the effectiveness of these improvement strategies. In addition, we find that using the optimal batch sizes of the reorder point policy and the demand rate to infer reorder intervals may lead to significant cost inefficiency. © 2010 INFORMS.
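For readers unfamiliar with the policy being evaluated, the sketch below simulates a single stage operating a local reorder interval policy (every T periods it orders up to a base-stock level S, paying fixed order, holding and backorder costs); the demand distribution, lead time and cost parameters are illustrative assumptions, and the code is not the paper's bottom-up recursion or optimisation.

```python
# A minimal sketch (assumed parameters, not the paper's model): one stage
# following a local reorder interval policy. Every T periods the stage
# reviews its local inventory position and orders up to a base-stock level
# S, paying a fixed cost per order plus holding and backorder costs.
import random

def simulate_stage(T=4, S=60, lead_time=2, horizon=10_000,
                   fixed_cost=50.0, holding=1.0, backorder=9.0, seed=0):
    rng = random.Random(seed)
    inventory, pipeline, total_cost = S, [], 0.0
    for t in range(horizon):
        # Receive orders whose lead time has elapsed.
        inventory += sum(q for due, q in pipeline if due == t)
        pipeline = [(due, q) for due, q in pipeline if due != t]
        # Review point: order up to S based on the local inventory position.
        if t % T == 0:
            position = inventory + sum(q for _, q in pipeline)
            order = max(0, S - position)
            if order > 0:
                pipeline.append((t + lead_time, order))
                total_cost += fixed_cost
        # Period demand, drawn uniformly from {5, ..., 15} for illustration.
        inventory -= rng.randint(5, 15)
        total_cost += holding * max(inventory, 0) + backorder * max(-inventory, 0)
    return total_cost / horizon  # average cost per period

print(simulate_stage())        # baseline policy
print(simulate_stage(T=2))     # more frequent reviews, more fixed order costs
```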
Abstract:
Ocean processes are dynamic, complex, and occur on multiple spatial and temporal scales. To obtain a synoptic view of such processes, ocean scientists collect data over long time periods. Historically, measurements were continually provided by fixed sensors, e.g., moorings, or gathered from ships. Recently, an increase in the utilization of autonomous underwater vehicles has enabled a more dynamic data acquisition approach. However, we still do not utilize the full capabilities of these vehicles. Here we present algorithms that produce persistent monitoring missions for underwater vehicles by balancing path following accuracy and sampling resolution for a given region of interest, which addresses a pressing need among ocean scientists to efficiently and effectively collect high-value data. More specifically, this paper proposes a path planning algorithm and a speed control algorithm for underwater gliders, which together give informative trajectories for the glider to persistently monitor a patch of ocean. We optimize a cost function that blends two competing factors: maximizing the information value along the path while minimizing deviation from the planned path due to ocean currents. Speed is controlled along the planned path by adjusting the pitch angle of the underwater glider, so that higher resolution samples are collected in areas of higher information value. The resulting paths are closed circuits that can be repeatedly traversed to collect long-term ocean data in dynamic environments. The algorithms were tested during sea trials on an underwater glider operating off the coast of southern California, as well as in Monterey Bay, California. The experimental results show significant improvements in data resolution and path reliability compared to previously executed sampling paths used in the respective regions.
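A toy version of the kind of blended objective described above is sketched below; the information field, the current model and the trade-off weight are illustrative assumptions rather than the authors' implementation.

```python
# A toy blended objective (assumed fields and weights, not the authors'
# implementation): reward information value collected along the path and
# penalise the cross-track push expected from ocean currents.
import math

def info_value(x, y):
    """Hypothetical scalar field of scientific interest (e.g. a plume)."""
    return math.exp(-((x - 3.0) ** 2 + (y - 2.0) ** 2) / 4.0)

def current(x, y):
    """Hypothetical ocean-current vector field (sheared eastward flow)."""
    return (0.3 * y, 0.1)

def path_cost(waypoints, alpha=0.6):
    """Blend information gain (reward) and expected deviation (penalty); lower is better."""
    gain = deviation = 0.0
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        gain += info_value(x1, y1)
        # Deviation proxy: current component perpendicular to the leg direction.
        dx, dy = x1 - x0, y1 - y0
        cx, cy = current(x0, y0)
        norm = math.hypot(dx, dy) or 1.0
        deviation += abs(cx * (-dy / norm) + cy * (dx / norm))
    return alpha * deviation - (1.0 - alpha) * gain

# A closed circuit, as in the persistent monitoring missions described above.
loop = [(0, 0), (2, 0), (4, 2), (2, 4), (0, 2), (0, 0)]
print(path_cost(loop))
```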
Abstract:
With the Development of the West Regions, the contradiction between the economy and geological hazards was once again brought to the fore in the Loess Plateau, which was, and remains, liable to geological hazards owing to its unique geological, hydrogeological, geographical and meteorological conditions. The goal of harmonious development between humans and the earth has always been there, and landslide hazard zoning provides an effective way to guard against geological hazards and damage. Against the background of the construction of a 750 kV transformer substation in TianShui, we summarise theories, methods and developments in landslide hazard zoning and discuss the application of the information value model to landslide hazard zoning. A "judgement matrix", as used in AHP, was introduced into the information value model to address the key point of landslide hazard zoning: the choice of factors and the weight of each factor. GIS was applied in the landslide hazard zoning for its comprehensive data management, spatial analysis and mapping functions. A zonation map of landslide hazard was produced in MAPGIS, intended to serve as a reference and guide for the construction of the transformer substation. Key words: landslide; hazard zoning; information value model; GIS; judgement matrix
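The abstract does not reproduce the model's equations; a commonly used formulation of the information value model in landslide susceptibility work, with factor weights taken here from the AHP judgement matrix described above, is the following sketch (symbols are generic, not taken from the paper):

\[
I(X_i) = \ln\frac{N_i / N}{S_i / S},
\qquad
I_{\mathrm{cell}} = \sum_{j} w_j \, I(X_{ij}),
\]

where \(N_i\) is the landslide area (or count of landslide cells) falling within factor class \(X_i\), \(N\) the total landslide area, \(S_i\) the area occupied by class \(X_i\), \(S\) the total study area, and \(w_j\) the weight of factor \(j\) derived from the judgement matrix; cells with higher \(I_{\mathrm{cell}}\) are mapped to higher hazard classes.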
Abstract:
Long-term biological time-series in the oceans are relatively rare. Using the two longest of these we show how the information value of such ecological time-series increases through space and time in terms of their potential policy value. We also explore the co-evolution of these oceanic biological time-series with changing marine management drivers. Lessons learnt from reviewing these sequences of observations provide valuable context for the continuation of existing time-series and perspective for the initiation of new time-series in response to rapid global change. Concluding sections call for a more integrated approach to marine observation systems and highlight the future role of ocean observations in adaptive marine management.
Abstract:
Doctoral thesis, Geography (Physical Geography), Universidade de Lisboa, Instituto de Geografia e Ordenamento do Território, 2014
Abstract:
In this thesis, we investigate the capacity of each cerebral hemisphere to use the visual information available during word recognition. It is generally accepted that the left hemisphere (LH) is better equipped for reading than the right hemisphere (RH). Indeed, the visuoperceptual mechanisms used in word recognition are located mainly in the LH (Cohen, Martinaud, Lemer et al., 2003). Since normal readers make optimal use of medium spatial frequencies (about 2.5 to 3 cycles per degree of visual angle) to recognise letters, it is possible that the LH processes these frequencies better than the RH (Fiset, Gosselin, Blais and Arguin, 2006). Moreover, studies of hemispheric lateralisation usually use a presentation paradigm in the visual periphery. It has been proposed that the effect of visual eccentricity on word recognition is unequal across the hemifields. In particular, the first letter is usually the one that carries the most information for identifying a word. It is also the most eccentric one when the word is presented to the left visual hemifield (LVF), which can impair its identification independently of the reading abilities of the RH. The objective of the first study is to determine the spatial frequency spectrum used by the LH and the RH in word recognition. That of the second study is to explore the biases created by eccentricity and by the informative value of letters during divided-field presentation. First, we find that the spatial frequency spectrum used by the two hemispheres in word recognition is broadly similar, even though the LH requires less visual information than the RH to reach the same level of performance. Surprisingly, however, the RH uses higher spatial frequencies to identify longer words. Second, for presentation to the LVF, we find that the first letter, that is, the most eccentric one, is among the best identified even when it has greater informative value. This runs counter to the hypothesis that letter eccentricity exerts a negative bias on words presented to the LVF. Interestingly, our results suggest the presence of a lexicon-specific processing strategy.
Abstract:
The objective of this study is to verify whether or not there is congruence between the idea of loyalty expressed in banks' discourse and its concept in relationship marketing, identifying the meaning of this construct in the messages of banking institutions, and also identifying the factors that lead customers to maintain long-lasting relationships with banks. The study is exploratory in nature and was conducted through individual interviews with retail customers and managers of public and private banks in Brasília/DF, in order to explore the customer-bank relationship and better guide the analysis of the data obtained. Data were collected from 11 interviewees of both sexes, residing in Brasília/DF, during March and April 2011. To achieve the proposed objectives, a qualitative research method was adopted, focusing on the informational value of the message itself, of the words, arguments and ideas expressed in it, using an interpretative approach to data analysis. The results showed a conceptual gap between the banks' idea of loyalty, defined as something tied to the understanding that a customer who trusts his or her bank is satisfied and does not leave it, and its concept in relationship marketing, which defines it as a deep commitment by the customer to consistently repurchase a product or service in the future: it was found that bank customers, regardless of factors such as the time or effort associated with switching providers, are sensitive to substantial fee increases, have no repurchase commitment to a bank, and do not commit to buying from a single bank. Consequently, it can be concluded that they are not loyal. The main factors responsible for long-lasting relationships with banks were found to be the quality of service provided by the bank and the reciprocity in the relationship, both of which underpin bank customers' feelings of satisfaction and trust. The study concludes with recommendations intended to benefit and develop managers in this segment.
Abstract:
Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied for target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexities adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first, which is capable of describing time-invariant spatial phenomena, such as ocean currents, temperature distributions and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.
Novel information theoretic functions are developed for these introduced Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback Leibler divergence is developed as the expectation of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models with respect to the future measurements. Then, this approach is extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed that shows the novel information theoretic functions are bounded. Based on this theorem, efficient estimators of the new information theoretic functions are designed, which are proved to be unbiased with the variance of the resultant approximation error decreasing linearly as the number of samples increases. Computational complexities for optimizing the novel information theoretic functions under sensor dynamics constraints are studied, and are proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with data of ocean currents obtained by moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm is designed based on the cumulative lower bound of the novel information theoretic functions, for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation based on the novel information theoretic functions are superior at learning the target kinematics with little or no prior knowledge.
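As a rough illustration of the quantity underlying such a Gaussian process expected Kullback-Leibler divergence, the sketch below computes the closed-form KL divergence between a GP prior and the posterior obtained after one noisy measurement; the kernel, test points and noise level are assumptions for demonstration, not the dissertation's estimator.

```python
# A rough illustration (assumed kernel, points and noise, not the
# dissertation's estimator) of the quantity underlying a Gaussian process
# expected KL divergence: the closed-form KL divergence between the GP
# prior and the posterior after one noisy measurement.
import numpy as np

def rbf(a, b, length=1.0, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def kl_gaussians(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in closed form."""
    k, inv1, diff = len(mu0), np.linalg.inv(cov1), mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - k
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# GP prior over three test points, then the posterior after one noisy observation.
X_test = np.array([0.0, 0.5, 1.0])
x_obs, y_obs, noise = np.array([0.4]), np.array([1.2]), 0.1
K_tt = rbf(X_test, X_test) + 1e-9 * np.eye(3)      # prior covariance (jittered)
K_to = rbf(X_test, x_obs)
K_oo = rbf(x_obs, x_obs) + noise * np.eye(1)
gain = K_to @ np.linalg.inv(K_oo)
mu_post = (gain @ y_obs).ravel()
cov_post = K_tt - gain @ K_to.T
# Information carried by the measurement, expressed here as KL(posterior || prior).
print(kl_gaussians(mu_post, cov_post, np.zeros(3), K_tt))
```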
Abstract:
Companies operating in the wood processing industry need to increase their productivity by implementing automation technologies in their production systems, as increasing global competition and rising raw material prices challenge their competitiveness. Yet too extensive automation brings risks such as a deterioration in situation awareness and operator deskilling. The concept of Levels of Automation is generally seen as a means to achieve a balanced task allocation between the operators' skills and competences and the need for automation technology relieving humans of repetitive or hazardous work activities. The aim of this thesis was to examine to what extent existing methods for assessing Levels of Automation in production processes are applicable in the wood processing industry when focusing on an improved competitiveness of production systems. This was done by answering the following research questions (RQ): RQ1: What method is most appropriate for measuring Levels of Automation in the wood processing industry? RQ2: How can the measurement of Levels of Automation contribute to an improved competitiveness of the wood processing industry's production processes? To answer RQ1, literature reviews were used to identify the main characteristics of the wood processing industry affecting its automation potential and appropriate methods for assessing Levels of Automation. When selecting the most suitable method, factors such as relevance to the target industry, application complexity and the operational level the method penetrates were important. The DYNAMO++ method, which covers both a rather quantitative technical-physical dimension and a more qualitative social-cognitive one, was seen as most appropriate in view of these factors. To answer RQ2, a case study was undertaken at a major Swedish manufacturer of interior wood products to point out how the measurement of Levels of Automation can contribute to an improved competitiveness of the wood processing industry. The focus was on the task level on the shop floor, and concrete improvement suggestions were elaborated after applying the measurement method for Levels of Automation. The main aspects considered for generalization were enhancements regarding ergonomics in process design and cognitive support tools for shop-floor personnel through task standardization. Furthermore, difficulties in automating grading and sorting processes, owing to the heterogeneous material properties of wood, argue for a suitable arrangement of human intervention options in terms of work task allocation. The application of a modified version of DYNAMO++ revealed its pros and cons in a case study characterized by high operator involvement in the improvement process, as well as the distinct predisposition of DYNAMO++ towards application in assembly systems.
Abstract:
Since the 1960s, the value relevance of accounting information has been an important topic in accounting research. The value relevance research provides evidence as to whether accounting numbers relate to corporate value in a predicted manner (Beaver, 2002). Such research is not only important for investors but also provides useful insights into accounting reporting effectiveness for standard setters and other users. Both the quality of accounting standards used and the effectiveness associated with implementing these standards are fundamental prerequisites for high value relevance (Hellstrom, 2006). However, while the literature comprehensively documents the value relevance of accounting information in developed markets, little attention has been given to emerging markets where the quality of accounting standards and their enforcement are questionable. Moreover, there is currently no known research that explores the association between level of compliance with International Financial Reporting Standards (IFRS) and the value relevance of accounting information. Motivated by the lack of research on the value relevance of accounting information in emerging markets and the unique institutional setting in Kuwait, this study has three objectives. First, it investigates the extent of compliance with IFRS with respect to firms listed on the Kuwait Stock Exchange (KSE). Second, it examines the value relevance of accounting information produced by KSE-listed firms over the 1995 to 2006 period. The third objective links the first two and explores the association between the level of compliance with IFRS and the value relevance of accounting information to market participants. Since it is among the first countries to adopt IFRS, Kuwait provides an ideal setting in which to explore these objectives. In addition, the Kuwaiti accounting environment provides an interesting regulatory context in which each KSE-listed firm is required to appoint at least two external auditors from separate auditing firms. Based on the research objectives, five research questions (RQs) are addressed. RQ1 and RQ2 aim to determine the extent to which KSE-listed firms comply with IFRS and factors contributing to variations in compliance levels. These factors include firm attributes (firm age, leverage, size, profitability, liquidity), the number of brand name (Big-4) auditing firms auditing a firm’s financial statements, and industry categorization. RQ3 and RQ4 address the value relevance of IFRS-based financial statements to investors. RQ5 addresses whether the level of compliance with IFRS contributes to the value relevance of accounting information provided to investors. Based on the potential improvement in value relevance from adopting and complying with IFRS, it is predicted that the higher the level of compliance with IFRS, the greater the value relevance of book values and earnings. The research design of the study consists of two parts. First, in accordance with prior disclosure research, the level of compliance with mandatory IFRS is examined using a disclosure index. Second, the value relevance of financial statement information, specifically, earnings and book value, is examined empirically using two valuation models: price and returns models. The combined empirical evidence that results from the application of both models provides comprehensive insights into value relevance of accounting information in an emerging market setting. 
Consistent with expectations, the results show the average level of compliance with IFRS mandatory disclosures for all KSE-listed firms in 2006 was 72.6 percent; thus, indicating KSE-listed firms generally did not fully comply with all requirements. Significant variations in the extent of compliance are observed among firms and across accounting standards. As predicted, older, highly leveraged, larger, and profitable KSE-listed firms are more likely to comply with IFRS required disclosures. Interestingly, significant differences in the level of compliance are observed across the three possible auditor combinations of two Big-4, two non-Big 4, and mixed audit firm types. The results for the price and returns models provide evidence that earnings and book values are significant factors in the valuation of KSE-listed firms during the 1995 to 2006 period. However, the results show that the value relevance of earnings and book values decreased significantly during that period, suggesting that investors rely less on financial statements, possibly due to the increase in the available non-financial statement sources. Notwithstanding this decline, a significant association is observed between the level of compliance with IFRS and the value relevance of earnings and book value to KSE investors. The findings make several important contributions. First, they raise concerns about the effectiveness of the regulatory body that oversees compliance with IFRS in Kuwait. Second, they challenge the effectiveness of the two-auditor requirement in promoting compliance with regulations as well as the associated cost-benefit of this requirement for firms. Third, they provide the first known empirical evidence linking the level of IFRS compliance with the value relevance of financial statement information. Finally, the findings are relevant for standard setters and for their current review of KSE regulations. In particular, they highlight the importance of establishing and maintaining adequate monitoring and enforcement mechanisms to ensure compliance with accounting standards. In addition, the finding that stricter compliance with IFRS improves the value relevance of accounting information highlights the importance of full compliance with IFRS and not just mere adoption.
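The abstract refers to price and returns models without stating their specifications; value relevance studies of this kind typically estimate regressions along the following lines (a generic sketch, not necessarily the exact models used in this thesis):

\[
P_{it} = \alpha_0 + \alpha_1 \, EPS_{it} + \alpha_2 \, BVPS_{it} + \varepsilon_{it},
\qquad
R_{it} = \beta_0 + \beta_1 \, \frac{E_{it}}{P_{it-1}} + \beta_2 \, \frac{\Delta E_{it}}{P_{it-1}} + \varepsilon_{it},
\]

where \(P_{it}\) is the share price, \(EPS_{it}\) earnings per share, \(BVPS_{it}\) book value per share, \(R_{it}\) the stock return, and \(E_{it}\) earnings; value relevance is then read from the explanatory power (e.g. \(R^2\)) and the significance of the earnings and book value coefficients.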
Abstract:
Understanding how IT investments contribute to business value is an important issue, as this assists the efficient use of technology resources in businesses. While there is agreement that IT contributes to business value, we are unsure how it does so in the wider context, including developing countries. With the view that understanding the interaction between IT resources and their users may provide better insights into the potential of IT investments, this study investigates businesses' perception of the intangible benefits of their IT investments. The results indicate that businesses in developing countries perceive that their IT investments provide intangible benefits, especially at the process level, and that this contributes to business value.
Abstract:
Understanding the business value of IT has mostly been studied in developed countries, but because most investment in developing countries is derived from external sources, the influence of that investment on business value is likely to be different. We test this notion using a two-layer model. We examine the impact of IT investments on firm processes, and the relationship of these processes to firm performance in a developing country. Our findings suggest that investment in different areas of IT positively relates to improvements in intermediate business processes and these intermediate business processes positively relate to the overall financial performance of firms in a developing country.