932 results for Knowledge Information Objects
Abstract:
This thesis is part of a project aimed at predicting the academic performance of PhD students, carried out by INSOC (International Network on Social Capital and Performance). The INSOC research group comprises the universities of Girona (Spain), Ljubljana (Slovenia), Giessen (Germany) and Ghent (Belgium). The first objective of this thesis is to develop comparative quantitative analyses of PhD students' academic performance in Spain, Slovenia and Germany, based on the individual performance results obtained from each university. The international nature of the research group calls for comparative research. We used personal, attitudinal and network variables to predict performance. The second objective of this thesis is to understand, qualitatively, why the network variables do not help to predict performance quantitatively at the University of Girona (Spain). In Chapter 1 we define concepts related to performance and list each of the independent variables (network, personal and attitudinal variables), summarising the literature. Finally, we explain how doctoral studies are organised in each country. Building on these theoretical definitions, in the following chapters we first present the questionnaires used in Spain, Slovenia and Germany to measure these different types of variables. We then compare the variables that are relevant for predicting PhD students' performance in each country. After that, we fit different regression models to predict performance across countries. In all these models the network variables fail to predict performance at the University of Girona. Finally, we use qualitative studies to understand these unexpected results. In Chapter 2 we explain how we designed and administered the questionnaires in the different countries in order to explain the PhD students' performance observed in Spain, Slovenia and Germany. In Chapter 3 we create comparable indicators, although comparability problems arise for particular questions in Spain, Slovenia and Germany. In this chapter we explain how we use the variables from the three countries to create comparable indicators. This step is very important because the main objective of the INSOC research group is to compare PhD students' performance across the different countries. In Chapter 4 we compare regression models fitted to predict PhD students' performance at the University of Girona (Spain) and in Slovenia. The variables are characteristics of the PhD students' research groups understood as an egocentric social network, personal and attitudinal characteristics of the PhD students, and some characteristics of their supervisors. We found that the egocentric network variables did not predict performance at the University of Girona. In Chapter 5 we compare the Slovenian, Spanish and German data, following the methodology of Chapter 4. We conclude that the German case is very different; the predictive power of the network variables does not improve. In Chapter 6 the PhD students' research group is treated as a duocentred network (Coromina et al., 2008), with the aim of obtaining information on the mutual relationship between the students and their supervisors, and on the contacts of both with the other members of the network.
Including the duocentred network does not improve the predictive power of the regression model based on the egocentric network variables. Chapter 7 seeks to understand why the network variables do not predict performance at the University of Girona. We use a mixed-methods design, expecting the qualitative study to uncover the reasons why the quality of the network fails to show up in the quality of the students' work. To collect data for the qualitative study we use in-depth interviews.
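A hedged sketch of the kind of model this abstract describes: an ordinary least squares regression of an academic performance indicator on personal, attitudinal and egocentric-network predictors. The variable names and the simulated data are illustrative assumptions, not the INSOC indicators; the point is only the shape of the analysis, in which the network coefficient can come out non-significant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                                    # hypothetical PhD students

personal = rng.normal(size=n)              # e.g. hours worked per week (scaled)
attitudinal = rng.normal(size=n)           # e.g. motivation scale score
network = rng.normal(size=n)               # e.g. advice-network size (scaled)

# Simulated performance: driven by the personal and attitudinal variables
# only, mimicking the Girona finding that network variables add nothing.
performance = 0.5 * personal + 0.4 * attitudinal + rng.normal(size=n)

X = sm.add_constant(np.column_stack([personal, attitudinal, network]))
model = sm.OLS(performance, X).fit()
print(model.summary(xname=["const", "personal", "attitudinal", "network"]))
```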
Abstract:
The human visual ability to perceive depth looks like a puzzle. We perceive three-dimensional spatial information quickly and efficiently by using the binocular stereopsis of our eyes and, what is more important, the knowledge of the most common objects that we acquire through living. Nowadays, modelling the behaviour of our brain is still out of reach, which is why the huge problem of 3D perception and, further, interpretation is split into a sequence of easier problems. A great deal of research in robot vision is devoted to obtaining 3D information about the surrounding scene. Most of this research is based on modelling human stereopsis by using two cameras as if they were two eyes. This method is known as stereo vision; it has been widely studied in the past, is being studied at present, and will surely receive much attention in the future. This fact allows us to affirm that it is one of the most interesting topics in computer vision. The stereo vision principle is based on obtaining the three-dimensional position of an object point from the positions of its projections in both camera image planes. However, before 3D information can be inferred, the mathematical models of both cameras have to be known. This step is known as camera calibration and is described in detail in the thesis. Perhaps the most important problem in stereo vision is determining the pair of homologous points in the two images, known as the correspondence problem; it is also one of the most difficult problems to solve and is currently investigated by many researchers. Epipolar geometry allows us to reduce the correspondence problem, and an approach to it is described in the thesis. Nevertheless, it does not solve the problem entirely, as many other considerations have to be taken into account; for example, there may be points without correspondence due to surface occlusion, or simply due to a projection that falls outside the camera's field of view. The interest of the thesis is focused on structured light, which is considered one of the techniques most frequently used to reduce the problems related to stereo vision. Structured light is based on the relationship between a projected light pattern and its image captured by a camera sensor. The deformations between the pattern projected onto the scene and the one captured by the camera permit us to obtain three-dimensional information about the illuminated scene. This technique has been widely used in applications such as 3D object reconstruction, robot navigation and quality control. Although the projection of regular patterns solves the problem of points without a match, it does not solve the problem of multiple matching, which forces us to use computationally expensive algorithms to search for the correct matches. In recent years, another structured light technique has grown in importance. It is based on codifying the light projected onto the scene so that it can be used as a tool to obtain a unique match: each token of light imaged by the camera carries a label, and we have to read that label (decode the pattern) in order to solve the correspondence problem. The advantages and disadvantages of stereo vision versus structured light, together with a survey of coded structured light, are presented and discussed. The work carried out in the frame of this thesis has made it possible to present a new coded structured light pattern which solves the correspondence problem uniquely and robustly.
Uniquely, as each token of light is coded by a different word, which removes the problem of multiple matching. Robustly, since the pattern has been coded using the position of each token of light with respect to both coordinate axes. Algorithms and experimental results are included in the thesis. The reader will find examples of the 3D measurement of static objects, and of the more complicated measurement of moving objects. The technique can be used in both cases, as the pattern is coded in a single projection shot, so it is applicable to several robot vision problems. Our interest is focused on the mathematical study of the camera and pattern projector models; on how these models can be obtained by calibration; and on how they can be used to obtain three-dimensional information from two corresponding points. Furthermore, we have studied structured light and coded structured light, and we have presented a new coded structured light pattern. However, in this thesis we started from the assumption that the corresponding points could be well segmented from the captured image. Computer vision constitutes a huge problem, and a lot of work is being done at all levels of human vision modelling, starting from (a) image acquisition; (b) image enhancement, filtering and processing; and (c) image segmentation, which involves thresholding, thinning, contour detection, texture and colour analysis, and so on. The interest of this thesis starts at the next step, usually known as depth perception or 3D measurement.
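A minimal sketch of the final step this abstract builds towards: once both cameras are calibrated (their projection matrices known) and a pair of corresponding points has been identified, the 3D point can be recovered by triangulation. The matrices and pixel coordinates below are invented toy values, and linear (DLT) triangulation is a standard textbook method, not necessarily the thesis' exact algorithm.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Least-squares 3D point from its projections x1, x2 in two views."""
    # Each image point contributes two linear constraints on the homogeneous
    # 3D point X: x * (P[2] @ X) = P[0] @ X and y * (P[2] @ X) = P[1] @ X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null-space vector = homogeneous X
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenise

# Toy calibrated cameras: same intrinsics K, second camera shifted along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 2.0, 1.0])   # a point 2 m in front of camera 1
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]  # its projection in image 1
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]  # its projection in image 2

print(triangulate(P1, P2, x1, x2))         # ~ [0.2, -0.1, 2.0]
```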
Abstract:
The work developed in this thesis presents an in-depth study and provides innovative solutions in the field of recommender systems. The methods these systems use to produce recommendations, such as Content-Based Filtering (CBF), Collaborative Filtering (CF) and Knowledge-Based Filtering (KBF), require information about users in order to predict their preferences for certain products. This information may be demographic (gender, age, address, etc.), ratings given to products bought in the past, or information about the users' interests. There are two ways of obtaining this information: users provide it explicitly, or the system acquires the implicit information available in users' transactions or search histories. For example, the movie recommender system MovieLens (http://movielens.umn.edu/login) asks users to rate at least 15 movies on a scale from * to ***** (awful, ..., must be seen), and generates recommendations on the basis of these ratings. When users are not registered in the system and it has no information about them, some systems make recommendations from the browsing history. Amazon.com (http://www.amazon.com) makes recommendations based on the searches a user has carried out, or recommends the best-selling product. Nevertheless, these systems suffer from a certain lack of information. This problem is usually solved by acquiring additional information: users are asked about their interests, or the information is sought in additional sources. The solution proposed in this thesis is to look for this information in several sources, specifically those containing implicit information about users' preferences. These sources may be structured, such as databases with purchase records, or unstructured, such as web pages where users leave their opinion about a product they have bought or own. We identify three fundamental problems in achieving this objective: 1. Identifying sources with information suitable for recommender systems. 2. Defining criteria that allow the most suitable sources to be compared and selected. 3. Retrieving information from unstructured sources. To this end, the thesis develops: 1. A methodology for identifying and selecting the most suitable sources, using criteria based on the characteristics of the sources together with a trust measure. 2. A mechanism for retrieving the unstructured user information available on the web, using text mining techniques and ontologies to extract the information and structure it appropriately so that recommenders can use it. The contributions of the work developed in this doctoral thesis are: 1. The definition of a set of characteristics for classifying sources relevant to recommender systems. 2. The development of a source relevance measure computed from the defined characteristics. 3. The application of a trust measure to obtain the most reliable sources, where trust is defined from the perspective of recommendation improvement: a reliable source is one that improves the recommendations. 4.
The development of an algorithm for selecting, from a set of candidate sources, the most relevant and reliable ones, using the measures described in the previous points. 5. The definition of an ontology for structuring the information about users' preferences that is available on the Internet. 6. The creation of a mapping process that automatically extracts information about users' preferences available on the web and places it into the ontology. These contributions achieve two important objectives: 1. Improving recommendations by using alternative information sources that are relevant and reliable. 2. Obtaining implicit information about users that is available on the Internet.
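A minimal sketch of how contributions 2-4 above could fit together: each candidate source gets a relevance score computed from its characteristics, that score is weighted by a trust value reflecting how much the source improved past recommendations, and the top-scoring sources are selected. All characteristics, weights and trust values below are invented for illustration; the thesis defines the actual criteria and measures.

```python
sources = {
    # characteristic scores in [0, 1]: coverage, freshness, structure
    "purchases_db": {"coverage": 0.9, "freshness": 0.6, "structure": 1.0},
    "review_pages": {"coverage": 0.7, "freshness": 0.9, "structure": 0.2},
    "forum_posts":  {"coverage": 0.4, "freshness": 0.8, "structure": 0.1},
}
weights = {"coverage": 0.5, "freshness": 0.3, "structure": 0.2}

# Trust: how much did recommendations improve when this source was used?
trust = {"purchases_db": 0.8, "review_pages": 0.6, "forum_posts": 0.3}

def relevance(characteristics):
    """Relevance measure: weighted sum of the source's characteristics."""
    return sum(weights[c] * v for c, v in characteristics.items())

def select(sources, trust, k=2):
    """Rank sources by relevance * trust and keep the k best."""
    scored = {name: relevance(ch) * trust[name] for name, ch in sources.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

print(select(sources, trust))   # -> ['purchases_db', 'review_pages']
```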
Abstract:
Eye tracking has become a preponderant technique in the evaluation of user interaction and behaviour with study objects in defined contexts. Common eye tracking data representation techniques offer valuable input regarding user interaction and eye gaze behaviour, namely through the measurement of fixations and saccades. However, these and other techniques may be insufficient for representing the data acquired in specific studies, namely because of the complexity of the study object being analysed. This paper contributes a summary of the data representation and information visualization techniques used in data analysis within different contexts (advertising, websites, television news and video games). Additionally, the paper presents several methodological approaches that resulted from studies developed, and under development, at CETAC.MEDIA - Communication Sciences and Technologies Research Centre. In the studies described, traditional data representation techniques proved insufficient; as a result, new forms of representing data, based on common techniques, were developed with the objective of improving communication and information strategies. For each of these studies, a brief summary of the contribution to its respective area is presented, together with the data representation techniques used and some of the results obtained.
Abstract:
This internship report presents the work carried out at the Autoritat Portuària de Barcelona (APB), more precisely at the Documentation Centre of the Port Authority of Barcelona, over a period of 150 hours, during which I had the opportunity to work in the different documentation services and perform the tasks inherent to an institution of the same type as the one where I carry out a similar activity in Portugal. The report describes the company and the work carried out at the Documentation Centre (CENDOC), covering all the functions performed in document management, in the library, in the intermediate and historical archives, and in the photographic archive. The Archive service manages a documentary collection of more than 3,900 linear metres of textual documents, 500 linear metres of graphic and cartographic documentation, and 75,000 photographs. It also manages the APB's movable cultural heritage, comprising both the documentary collection of the Historical Archive (texts and images) and the collections of artistic objects of historical and cultural interest (paintings, sculptures, decorative arts, nautical charts, ship models). All the documentation of the Intermediate Archive was duly treated, cleaned and organised in a new archive space with better storage conditions. Building on this work, a new Classification Plan was developed in May 2012, during the same period as my internship, in order to improve the Archive service. The Library service offers its users a collection of 1,276 periodicals and more than 2,300 monographs, catalogued in the CDS/ISIS software. Classification is based on the Universal Decimal Classification (UDC) and on a specific thesaurus compiled by the CENDOC staff. As a Senior Officer at the Documentation and Information Centre of the Administração do Porto de Lisboa (APL), I found that this internship made an important contribution to the service I provide there. It undoubtedly gave me a better theoretical and practical understanding of the tasks involved, and the capability to develop projects related to the functions I perform at the Documentation and Information Centre of the Administração do Porto de Lisboa.
Preserving and developing in museology: a contribution to the study of the museological object and the museological process
Abstract:
This study sought an answer to the apparent contradiction between the acts of preserving and developing in museological work, and, through that answer, a deeper understanding of Museology. Using the "Grounded Theory" research methodology (Glaser & Strauss, 1967; Ellen, 1992; Mark, 1996; Marshall & Rossman, 1999), it adopted the definition of museum in the ICOM Statutes (2001) as the conceptual starting point for the research. A - In working towards the initial answer, the study achieved the following results: i) It discerned the phases and the rationality of the museological process through which objects acquire "heritage identity". ii) It formulated the concept of the "museological object" in a sense distinct from that of Heritage or the "heritage object", confirming that the contradiction stated in the initial hypothesis could only disappear, or be reconciled, within a paradigm of museological work conceived as an act of communication. iii) It consequently proposed a different Programme for guiding museological work, demonstrating that it would give heritage greater permanence and transmissibility, while also being able to encompass heritage relating to the materiality, iconicity, orality and gestuality of objects. iv) It proposed a Lexicon of Concepts capable of justifying these new proposals. v) It suggested a museal development index (IDM = Σ ƒξ [IP.ID.IC] / CT.CR) so that museological work can be evaluated and quantified. B - Towards a deeper understanding of Museology, the study achieved the following results: vi) It verified the need to master management skills, so that museological work is not restricted to a single type of collection or heritage. vii) In order to make it possible to continue investigating Museology as a new branch or discipline of knowledge, it suggested the strategic need to link it to the broader study of Memory, pointing to two paths: on the one hand, considering the phylogenetic heritage of the "ways of storing information" among different organisms and systems (Lecointre & Le Guyader, 2001); on the other, considering the constraints that arise during ontogeny and individual maturation, which require the molecular biology of cognition (Squire & Kandel, 2002) to be taken into account in the processing of memory and heritage (encoding, storage, evocation and retrieval, forgetting).
Abstract:
Seventeen-month-old infants were presented with pairs of images, in silence or with the non-directive auditory stimulus 'look!'. The images had been chosen so that one image depicted an item whose name was known to the infant, and the other depicted an item whose name was not known to the infant. Infants looked longer at images for which they had names than at images for which they did not have names, despite the absence of any referential input. The experiment controlled for the familiarity of the objects depicted: in each trial, image pairs presented to infants had previously been judged by caregivers to be of roughly equal familiarity. From a theoretical perspective, the results indicate that objects with names are of intrinsic interest to the infant. The possible causal direction for this linkage is discussed, and it is concluded that the results are consistent with Whorfian linguistic determinism, although other construals are possible. From a methodological perspective, the results have implications for the use of preferential looking as an index of early word comprehension.
Abstract:
Mainframes, corporate and central servers are becoming information servers. The requirement for more powerful information servers is the best opportunity to exploit the potential of parallelism. ICL recognized the opportunity of the 'knowledge spectrum', namely to convert raw data into information and then into high-grade knowledge. Its response to this, and to the underlying search problems, was to introduce the CAFS retrieval engine. The CAFS product demonstrates that it is possible to move functionality within an established architecture, introduce a different technology mix and exploit parallelism to achieve radically new levels of performance. CAFS also demonstrates the benefit of achieving this transparently behind existing interfaces. ICL is now working with Bull and Siemens to develop the information servers of the future by exploiting new technologies as they become available. The objective of the joint Esprit II European Declarative System project is to develop a smoothly scalable, highly parallel computer system, EDS. EDS will in the main be an SQL server and an information server. It will support the many data-intensive applications which the companies foresee; it will also support application-intensive and logic-intensive systems.
Abstract:
This paper describes the user modeling component of EPIAIM, a consultation system for data analysis in epidemiology. The component is aimed at representing knowledge of concepts in the domain, so that their explanations can be adapted to user needs. The first part of the paper describes two studies aimed at analysing user requirements. The first is a questionnaire study which examines the respondents' familiarity with concepts. The second is an analysis of concept descriptions in textbooks and from expert epidemiologists, which examines how discourse strategies are tailored to the level of experience of the expected audience. The second part of the paper describes how the results of these studies have been used to design the user modeling component of EPIAIM. This module follows a two-step approach. In the first step, a few trigger questions allow the activation of a stereotype that includes a "body" and an "inference component". The body is the representation of the body of knowledge that a class of users is expected to know, along with the probability that the knowledge is known. In the inference component, the process of learning concepts is represented as a belief network. Hence, in the second step the belief network is used to refine the initial default information in the stereotype's body. This is done by asking a few questions about those concepts for which it is uncertain whether or not they are known to the user, and propagating this new evidence to revise the whole situation. The system has been implemented on a workstation under UNIX. An example of its functioning is presented, and advantages and limitations of the approach are discussed.
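A toy sketch of the two-step approach just described, with hand-rolled evidence propagation standing in for a full belief network. All concepts, probabilities and dependencies are invented for illustration; EPIAIM's actual stereotypes and network differ.

```python
# Step 1: a stereotype "body" assigns each concept a prior probability that
# a user of this class knows it (values invented for illustration).
stereotype_body = {
    "mean": 0.95,
    "standard deviation": 0.80,
    "odds ratio": 0.50,      # uncertain -> worth asking about
    "confounding": 0.45,     # uncertain -> may be resolved by propagation
}

# Toy "inference component": for an arc (a, b),
# arcs[(a, b)] = (P(knows b | knows a), P(knows b | does not know a)).
arcs = {
    ("odds ratio", "confounding"): (0.85, 0.25),
}

def ask_user(concept):
    """Stand-in for a trigger question; returns True if the user knows it."""
    return input(f"Do you know the concept '{concept}'? [y/n] ").lower() == "y"

def refine(body, arcs, threshold=0.3):
    """Step 2: ask only about still-uncertain concepts, propagate evidence."""
    beliefs = dict(body)
    for concept in list(beliefs):
        if abs(beliefs[concept] - 0.5) < threshold:   # still uncertain -> ask
            known = ask_user(concept)
            beliefs[concept] = 1.0 if known else 0.0
            for (a, b), (p_if, p_ifnot) in arcs.items():
                if a == concept:                      # revise dependents,
                    beliefs[b] = p_if if known else p_ifnot  # saving questions
    return beliefs

if __name__ == "__main__":
    print(refine(stereotype_body, arcs))
```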
Abstract:
Context: Learning can be regarded as knowledge construction, in which prior knowledge and experience serve as the basis for learners to expand their knowledge base. Such a process of knowledge construction has to take place continuously in order to enhance the learners' competence in a competitive working environment. As information consumers, individual users demand personalised information provision which meets their own specific purposes, goals and expectations. Objectives: The current methods in requirements engineering are capable of modelling the common user's behaviour in the domain of knowledge construction. The users' requirements can be represented as a case in the defined structure, which can be reasoned over to enable requirements analysis. Such analysis needs to be enhanced so that personalised information provision can be tackled and modelled. However, there is a lack of suitable modelling methods to achieve this end. This paper presents a new ontological method for capturing individual users' requirements and transforming them into personalised information provision specifications, so that the right information can be provided to the right user for the right purpose. Method: An experiment was conducted based on a qualitative method. A medium-sized group of users participated to validate the method and its techniques, i.e. articulation, mapping, configuration and learning content, and the results were used as feedback for improvement. Result: The research has produced an ontology model with a set of techniques which support the functions of profiling users' requirements, reasoning over requirements patterns, generating workflows from norms, and formulating information provision specifications. Conclusion: Current requirements engineering approaches provide the methodical capability for developing solutions. Our research outcome, i.e. the ontology model with its techniques, can further enhance RE approaches for modelling individual users' needs and discovering users' requirements.
Abstract:
More data will be produced in the next five years than in the entire history of humankind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report, which can be found at http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy and which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond with agility to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06)—since emulated internationally—pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals.
To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders. A coherent strategy is essential in order to establish and sustain the UK as an international leader of well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and to roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation plus the skills developed will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
Abstract:
Construction materials and equipment are essential building blocks of every construction project and may account for 50-60 per cent of the total cost of construction. The rate of their utilization, on the other hand, is the element that most directly relates to project progress. A growing concern in the industry that inadequate efficiency hinders its success could thus be addressed by turning construction into a logistic process. Although mostly limited, recent attempts and studies show that Radio Frequency IDentification (RFID) applications have significant potential in construction. The aim of this research, however, is to show that the technology should be used not only for automation and tracking to overcome supply chain complexity, but also as a tool to generate, record and exchange process-related knowledge among the supply chain stakeholders. This would enable all involved parties to identify and understand the consequences of any forthcoming difficulties and react accordingly before they cause major disruptions in the construction process. To achieve this aim the study proceeds in several steps. First, it develops a generic understanding of how RFID technology has been used in logistic processes in industrial supply chain management. Second, it investigates recent applications of RFID as an information and communication technology support facility in construction logistics for the management of the construction supply chain. Building on these, the study develops an improved concept of a construction logistics architecture that explicitly relies on integrating RFID with the Global Positioning System (GPS). The developed conceptual model architecture shows that the categorisation provided by RFID and the traceability resulting from RFID/GPS integration could be used as a tool to identify, record and share potential problems, and thus vastly improve knowledge management processes within the entire supply chain. The findings thus clearly show a need for future research in this area.
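A speculative sketch of the kind of record such an RFID/GPS integration could keep and share: the RFID tag supplies the categorisation, the GPS fix supplies the traceability, and a shared log lets stakeholders flag looming problems early. All field names, thresholds and the flagging rule are illustrative assumptions, not the conceptual model proposed in the study.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TrackingEvent:
    tag_id: str          # RFID tag -> categorisation of material/equipment
    category: str        # e.g. "precast panel", "crane part"
    lat: float           # GPS fix -> traceability along the supply chain
    lon: float
    timestamp: datetime
    note: str = ""       # free-text, process-related knowledge

@dataclass
class ProblemLog:
    """Shared log that lets supply-chain stakeholders spot issues early."""
    events: list = field(default_factory=list)

    def record(self, event: TrackingEvent):
        self.events.append(event)

    def flag_delays(self, expected_site, tolerance_deg=0.01):
        """Flag items whose last GPS fix is still far from the site."""
        lat0, lon0 = expected_site
        return [e for e in self.events
                if abs(e.lat - lat0) > tolerance_deg
                or abs(e.lon - lon0) > tolerance_deg]

log = ProblemLog()
log.record(TrackingEvent("TAG-0001", "precast panel", 51.5074, -0.1278,
                         datetime(2011, 5, 3, 9, 30), "left supplier depot"))
late = log.flag_delays(expected_site=(51.5155, -0.0922))
print([e.tag_id for e in late])   # -> ['TAG-0001']
```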
Abstract:
Purpose – The purpose of this paper is to investigate gym and non-gym users' use and understanding of nutrition labels. Design/methodology/approach – A consumer survey in the form of a questionnaire conducted in the Greater London area in February/March 2005. Subject recruitment took place in both a gym and a university setting. Frequency tables and the χ²-test were used to assess relationships between variables (p=0.05). Findings – The resulting sample consisted of 187 subjects, with a predominance of females and gym users. Of the subjects, 88 per cent reported at least occasionally reading nutrition labels, with higher reading rates amongst women, irrespective of gym-user status. Total and saturated fat content is the information most often viewed on labels; however, overall knowledge of the calorie content of fat is low, with 53 per cent of subjects responding that saturated fat contains more calories per gram than other types of fat. This paper does not find significant differences in the use and understanding of nutrition labels between gym and non-gym users, but it highlights the public's continued lack of understanding of nutrition labels. Originality/value – This paper is unique in that it investigates whether there is any difference between gym and non-gym users' use and interpretation of nutrition labels. It finds that gender had more impact on nutrition label knowledge than gym-user status. This points to a gender issue and questions the quality of information available to the general public. The paper is valuable as it highlights and identifies an area that requires further research and assessment, and is therefore useful to key stakeholders responsible for public health nutrition.
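A short sketch of the analysis method the survey reports: a chi-squared test of independence on a frequency table, judged at p = 0.05. The counts below are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Rows: gym users / non-gym users; columns: read labels / do not read labels.
# Invented counts, purely to demonstrate the test the paper describes.
observed = [[70, 20],
            [55, 42]]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Reject independence: label reading is associated with gym use.")
else:
    print("No significant association at the 5 per cent level.")
```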
Abstract:
The EU Project AquaTerra generates knowledge about the river-soil-sediment-groundwater system and delivers scientific information of value for river basin management. In this article, the use and ignorance of scientific knowledge in decision making is explored through a theoretical review. We elaborate on the 'two-communities theory', which explains the problems of the policy-science interface by relating and comparing the different cultures, contexts and languages of researchers and policy makers. Within AquaTerra, the EUPOL subproject examines the policy-science interface with the aim of achieving a good connection between the scientific output of the project and EU policies. We have found two major barriers, namely language and resources, as well as two types of relevant relationships: those between different research communities and those between researchers and policy makers.