944 results for Modern technology
Resumo:
Contemporary food production, given the degree of technology applied to it and the present state of scientific knowledge, should be able to feed the world, and production statistics confirm that the volumes produced are indeed sufficient. Yet the nutritional situation across the globe leaves much to be desired: on the one hand, the numbers of undernourished and malnourished people remain high and are even growing in some regions; on the other hand, an increasing number of overweight and obese people are experiencing, or are at risk of, adverse health consequences. The question arises how this situation is possible given the present state of food production and of nutritional knowledge. When the main causes of the present global nutrition situation are debated, it is the modern food system with its distortions that is most often criticised, with inappropriate food distribution singled out as one of the key problems. However, distribution is not the only factor shaping inequalities in food availability and accessibility; a number of other factors, including political influences, contribute to this situation. Each driver may affect more than one part of the system and have outcomes in several dimensions. It therefore makes sense to take a holistic view of the modern food system, embracing all its elements and the relationships between them, since this facilitates taking appropriate action to reach the desired outcome in the best possible way. Applying such a systematic approach and linking the various elements through their interactions makes it possible to picture all the likely outcomes and hence to find a better solution, at the global level, to the present nutritional imbalance across the globe.
Resumo:
The following contribution aims to meet the demands of a globalised, post-modern environment through the design and implementation of an online international project in which an SNS is used to bring together English as a Second Language (ESL) students from different parts of the world. The project was designed around the implementation of the Bologna process in the Faculty of Education of the University of Girona, where the requirement that all students acquire English at level B1 of the Common European Portfolio makes English a compulsory competence for communication, enabling its higher education candidates to operate in a globalised world. Alongside the University of Girona stands the International Education and Resource Network (iEARN), which promotes the participation of schools around the world in online international projects.
Resumo:
Background: Access to, and the use of, information and communication technology (ICT) is increasingly becoming a vital component of mainstream life. First-order (e.g. time and money) and second-order factors (e.g. beliefs of staff members) affect the use of ICT in different contexts. It is timely to investigate what these factors may be in the context of service provision for adults with intellectual disabilities given the role ICT could play in facilitating communication and access to information and opportunities as suggested in Valuing People. Method: Taking a qualitative approach, nine day service sites within one organization were visited over a period of 6 months to observe ICT-related practice and seek the views of staff members working with adults with intellectual disabilities. All day services were equipped with modern ICT equipment including computers, digital cameras, Internet connections and related peripherals. Results: Staff members reported time, training and budget as significant first-order factors. Organizational culture and beliefs about the suitability of technology for older or less able service users were the striking second-order factors mentioned. Despite similar levels of equipment, support and training, ICT use had developed in very different ways across sites. Conclusion: The provision of ICT equipment and training is not sufficient to ensure their use; the beliefs of staff members and organizational culture of sites play a substantial role in how ICT is used with and by service users. Activity theory provides a useful framework for considering how first- and second-order factors are related. Staff members need to be given clear information about the broader purpose of activities in day services, especially in relation to the lifelong learning agenda, in order to see the relevance and usefulness of ICT resources for all service users.
Resumo:
The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote participation technologies. The architecture of the new system uses the Java language as its programming environment. Since application parameters and hardware in a joint experiment are complex, with a large variability of components, requirements and specification solutions need to be flexible and modular, independent of operating system and computer architecture. To describe and organize the information on all the components and the connections among them, the systems are developed using the eXtensible Markup Language (XML). Communication between clients and servers uses remote procedure calls based on XML (XML-RPC). The integration of Java, XML and XML-RPC makes it easy to develop a standard data and communication access layer between users and laboratories using common software libraries and a Web application. The libraries allow data retrieval using the same methods for all user laboratories in the joint collaboration, and the Web application provides a simple graphical user interface (GUI). The TCABR tokamak team, in collaboration with the IPFN (Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa), is implementing these remote participation technologies. The first version was tested at the Joint Experiment on TCABR (TCABRJE), a Host Laboratory Experiment organized in cooperation with the IAEA (International Atomic Energy Agency) in the framework of the IAEA Coordinated Research Project (CRP) on "Joint Research Using Small Tokamaks". (C) 2010 Elsevier B.V. All rights reserved.
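As an aside to the abstract above, the following minimal C sketch (using libcurl) illustrates the transport that XML-RPC relies on: an HTTP POST whose body is a <methodCall> XML document. The server URL and the method name are invented placeholders; the actual TCABR layer is written in Java with common client libraries, so this is only an illustration of the protocol pattern, not the system's code.

/* Minimal XML-RPC call sketch: POST an XML <methodCall> over HTTP.
 * URL and method name are hypothetical placeholders. Build with -lcurl. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    const char *request =
        "<?xml version=\"1.0\"?>"
        "<methodCall>"
        "  <methodName>tcabr.getShotData</methodName>"
        "  <params><param><value><int>12345</int></value></param></params>"
        "</methodCall>";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *headers = curl_slist_append(NULL, "Content-Type: text/xml");

    curl_easy_setopt(curl, CURLOPT_URL, "http://tcabr.example.org/RPC2"); /* placeholder */
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, request);

    /* The XML <methodResponse> returned by the server is written to stdout. */
    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}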
Resumo:
The German-Austrian writer Robert Musil (1880-1942) is considered an artist with an extremely unorthodox conception of art's relation to a basic human problem. In his time there existed a dissociation between substance and social values. Musil took this as his starting point in considering the daunting dilemma that the accelerating technology of the century was, day by day, outstripping the ability of the human mind to adjust to it. He maintained that social organization, patterns of thought and cherished ideals correspond to a reality that no longer exists.
Resumo:
This thesis analyses the implications of investments in information and communication technology (ICT) in developing countries, especially in terms of education, to encourage the implementation of a more modern infrastructure rather than the continued use of traditional methods. Today, as interest and investment in ICT grow rapidly, the existing models and ideas for measuring the state of ICT are old and inaccurate, and cannot be applied to the cultures of developing countries. Policymakers and investors must take these problems into account when considering future investments or aid for ICT programmes, and researchers and teachers need to understand the factors that matter for the development of ICT and education before beginning studies in these countries. This thesis concludes that investments in mobile and wireless technologies help organizations and governments leapfrog traditional infrastructure, narrowing the digital divide and yielding better education, higher literacy, and sustainable development solutions for poor communities across the developing world.
Resumo:
This research aimed to identify the main factors that lead technology startups to fail. The study focused on companies located in the Southeast region of Brazil that operated between 2009 and 2014. First, a literature review was conducted to build a better understanding of basic concepts of entrepreneurship as well as modern techniques for developing it, together with an analysis of the entrepreneurial scenario in Brazil, with a focus on the Southeast. A qualitative study followed, in which 24 startup specialists were interviewed and asked which factors were crucial in leading a technology startup to fail. From the analysis of their answers, four main factors were identified and then validated through a quantitative survey: a questionnaire based on the interview responses was distributed to founders and executives of both failed and successful startups. The questionnaire was answered by 56 companies, and the responses were submitted to factor analysis to check the validity of the instrument. Finally, logistic regression was used to determine the extent to which each factor contributed to startup failure. The results suggest that the most significant factor leading technology startups in southeastern Brazil to fail is problems in the interpersonal relationship between partners or investors.
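To make the final modelling step concrete, the sketch below shows a minimal logistic regression fitted by gradient descent in C, of the kind used to relate factor scores to a binary failed/survived outcome. The two factors, their scores and the outcomes are invented for illustration; they are not the study's data.

/* Illustrative only: tiny logistic regression by batch gradient descent.
 * Factors, scores and outcomes are invented, not the survey data.
 * Build with -lm. */
#include <stdio.h>
#include <math.h>

#define N 8   /* observations (startups) */
#define D 2   /* factors: x0 = relationship problems, x1 = market-fit issues */

static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

int main(void)
{
    /* Hypothetical factor scores and outcomes (1 = failed, 0 = survived). */
    double x[N][D] = {{0.9,0.2},{0.8,0.7},{0.7,0.1},{0.9,0.8},
                      {0.1,0.3},{0.2,0.1},{0.3,0.4},{0.1,0.2}};
    int y[N] = {1,1,1,1,0,0,0,0};

    double w[D] = {0.0, 0.0}, b = 0.0, lr = 0.5;

    for (int epoch = 0; epoch < 2000; ++epoch) {
        double gw[D] = {0.0, 0.0}, gb = 0.0;
        for (int i = 0; i < N; ++i) {
            double z = b;
            for (int j = 0; j < D; ++j) z += w[j] * x[i][j];
            double err = sigmoid(z) - y[i];        /* gradient of the log-loss */
            for (int j = 0; j < D; ++j) gw[j] += err * x[i][j];
            gb += err;
        }
        for (int j = 0; j < D; ++j) w[j] -= lr * gw[j] / N;
        b -= lr * gb / N;
    }

    /* Coefficient sizes indicate how strongly each factor raises failure odds. */
    printf("w_relationship = %.3f, w_market = %.3f, b = %.3f\n", w[0], w[1], b);
    return 0;
}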
Resumo:
In the modern Knowledge Economy, in the era of Big Data, correctly understanding the use and management of Information and Communication Technology (ICT) on the basis of the academic field of Information Systems (IS) studies becomes increasingly relevant and strategic for organizations that intend to remain in business, be able to meet new demands (internal and external), and face the complex changes of market competition. This research draws on stages-of-growth theory, grounded in the studies of Richard L. Nolan in the 1970s. The academic literature on stages-of-growth models and the context of the IS field provide the conceptual foundations of this study. The research identifies a model, with constructs related to the growth stages of organizational ICT/IS initiatives, starting from Nolan's second-level benchmark variables, and proposes its operationalization through the creation and development of a scale. Exploratory and descriptive in character, the research makes a theoretical contribution to the stages-of-growth paradigm by adding a new growth process to its conceptual structure. As a result, in addition to a bilingual (Portuguese and English) scale instrument, it provides recommendations and rules for applying a survey research instrument in the continuation of this study. As a general implication, it is expected that using the instrument to measure the ICT/IS stage level of organizations may help two profiles of individuals: academics who study this subject, as well as practitioners who seek answers for their practical actions in the organizations where they work.
Resumo:
Modern agriculture demands investments in technology that allow farmers to improve the productivity and quality of their products and establish themselves in a competitive market. However, the high costs of acquiring and maintaining such technology can inhibit its spread and acceptance, mainly among the large number of small Brazilian grain farmers, who need low-cost, innovative technological solutions suited to their financial reality. Starting from this premise, this paper presents the development of a low-cost prototype for monitoring the temperature and humidity of grain stored in silos, and discusses the economic implications (cost/benefit ratio) of applying innovative low-cost technology to grain thermometry. The prototype consists of two electronic units, one for acquisition and another for data reception, as well as software that offers farmers more precise information for controlling aeration. Data communication between the electronic units and the software was reliable, and both were developed using low-cost electronic components and free software tools. The system was considered potentially viable for small Brazilian grain farmers and can be used in any type of small silo. It reduces installation and maintenance costs, is easy to expand, and its development cost is low compared with similar products available on the Brazilian market.
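As an illustration of the kind of logic such reception software might contain, the C sketch below parses a temperature/humidity frame sent by the acquisition unit and flags readings that call for aeration. The frame format, sensor identifiers and thresholds are assumptions made for the example, not the prototype's actual protocol or control rules.

/* Sketch of a reception-side check: parse "<cable>;<sensor>;<temp>;<humidity>"
 * frames and flag hot or humid spots. Format and thresholds are assumptions. */
#include <stdio.h>

struct reading {
    int cable_id;     /* thermometry cable inside the silo */
    int sensor_id;    /* position of the sensor along the cable */
    double temp_c;    /* grain temperature, degrees Celsius */
    double humidity;  /* relative humidity, percent */
};

static int parse_reading(const char *line, struct reading *r)
{
    return sscanf(line, "%d;%d;%lf;%lf",
                  &r->cable_id, &r->sensor_id, &r->temp_c, &r->humidity) == 4;
}

int main(void)
{
    const char *frames[] = {          /* simulated frames from the acquisition unit */
        "1;3;24.5;12.8",
        "2;1;31.2;16.5",
    };

    for (unsigned i = 0; i < sizeof frames / sizeof frames[0]; ++i) {
        struct reading r;
        if (!parse_reading(frames[i], &r)) continue;

        /* Illustrative rule: recommend aeration for hot or humid readings. */
        int aerate = (r.temp_c > 30.0) || (r.humidity > 14.0);
        printf("cable %d sensor %d: %.1f C, %.1f%% -> %s\n",
               r.cable_id, r.sensor_id, r.temp_c, r.humidity,
               aerate ? "start aeration" : "ok");
    }
    return 0;
}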
Resumo:
Fieldbus communication networks are a fundamental part of modern industrial automation technology. This paper presents an application of the project-based learning (PBL) paradigm to help electrical engineering students grasp the major concepts of fieldbus networks while attending a one-term elective microcontroller course. © 2012 IEEE.
Resumo:
Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board that we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, increasing the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve acquisition performance and knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After validation of the whole front-end architecture, this functionality would probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA configuration memory required the integration of a flash ISP (In-System Programming) memory and a smart architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behaviour of the LIRA chip was investigated in the digital environment of the DAQ board and we succeeded in driving the acquisition with the FPGA: PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to a loan from the University of Roma and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behaviour of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board will be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), introducing a new analog front end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis.
As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations intended for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN, Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip in order to integrate it into the general DAQ system. I then worked on the DAQ software to implement a proper Slow Control interface for the APSEL4D. Several APSEL4D chips with different thinning were tested during the test beam. Those thinned to 100 and 300 um showed an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, yielding good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking into account the multiple scattering effect.
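For reference, the pitch/sqrt(12) figure quoted above is the standard resolution limit for binary (single-pixel) readout: if hits are uniformly distributed across a pixel of pitch p and every hit is assigned to the pixel centre, the residual variance is

\sigma^2 = \frac{1}{p}\int_{-p/2}^{+p/2} x^2 \, dx = \frac{p^2}{12}
\qquad\Longrightarrow\qquad
\sigma = \frac{p}{\sqrt{12}},

which is why the measured width of the residual plot, once multiple scattering is subtracted, is compared against p/sqrt(12).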
Resumo:
Ancient pavements are composed of a variety of preparatory or foundation layers constituting the substrate, and of a layer of tesserae, pebbles or marble slabs forming the surface of the floor. In other cases, the surface consists of a beaten and polished mortar layer. The term mosaic is associated with the presence of tesserae or pebbles, while the more general term pavement is used in all cases. As past and modern excavations of ancient pavements have demonstrated, not all pavements display the stratigraphy of the substrate described in the ancient literary sources. In fact, the number and thickness of the preparatory layers, as well as the nature and properties of their constituent materials, often vary between pavements located in different sites, in different buildings within the same site, or even within the same building. For this reason, an investigation that takes the whole structure of the pavement into account is important when studying the archaeological context of the site where it is placed, when designing materials for its maintenance and restoration, when documenting it, and when presenting it to the public. Five case studies, archaeological sites containing floor mosaics and other kinds of pavements dated to the Hellenistic and Roman periods, were investigated by means of in situ and laboratory analyses. The results indicated that the characteristics of the studied pavements, namely the number and thickness of the preparatory layers and the properties of the mortars constituting them, vary according to the ancient use of the room where the pavements are placed and to the type of surface upon which they were built. The study contributed to the understanding of the function and technology of the pavements' substrate and to the characterization of its constituent materials. Furthermore, the research underlined the importance of investigating the whole structure of the pavement, including the foundation surface, when interpreting the archaeological context where it is located. A series of practical applications of the results is suggested: in the design of repair mortars for pavements, in the documentation of ancient pavements in conservation practice, and in the presentation of ancient pavements to the public, both in situ and in museums.
Resumo:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages that work at a high level of abstraction have been proposed; they rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budget are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It concentrates on OpenMP, the de-facto standard for shared-memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speedup and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares memory banks with the cores, together with a template for a scalable architecture that integrates them through the shared-memory system. A full software stack and toolchain are then developed to support platform design and to let programmers exploit the accelerators of the platform, and the OpenMP front end is extended to interact with it.
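The multi-level and irregular parallelism mentioned above corresponds to standard OpenMP constructs such as nested parallel regions and tasks. The C sketch below shows generic examples of both; it targets an ordinary desktop OpenMP runtime, not the embedded runtime library developed in the dissertation, and the team sizes and cut-off are arbitrary choices for illustration.

/* Generic OpenMP example: nested (multi-level) parallel regions plus
 * irregular, recursive task parallelism. Build with -fopenmp. */
#include <stdio.h>
#include <omp.h>

static long fib(int n)
{
    if (n < 2) return n;
    if (n < 20)                          /* cut-off: small subtrees stay serial */
        return fib(n - 1) + fib(n - 2);
    long a, b;
    #pragma omp task shared(a)           /* irregular parallelism via tasks */
    a = fib(n - 1);
    #pragma omp task shared(b)
    b = fib(n - 2);
    #pragma omp taskwait
    return a + b;
}

int main(void)
{
    omp_set_nested(1);                   /* allow multi-level parallel regions */

    #pragma omp parallel num_threads(2)  /* outer level, e.g. one team per cluster */
    {
        int outer = omp_get_thread_num();
        #pragma omp parallel num_threads(4)   /* inner level, e.g. cores of a cluster */
        {
            #pragma omp single           /* one inner thread spawns the task tree */
            printf("outer team %d: fib(30) = %ld\n", outer, fib(30));
        }
    }
    return 0;
}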
Resumo:
Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace, and an ever increasing number of cores is today integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validate design choices. This thesis focuses on these aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on a hybrid HW/SW synchronization mechanism. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. At the same time, the physical implications of modern deep sub-micron technology severely limit the performance and reliability of such designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work, a novel hybrid memory architecture is devised to overcome reliability issues and, at the same time, improve energy efficiency through aggressive voltage scaling when workload requirements allow it. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We therefore introduce a variation-tolerant extension of the baseline many-core architecture; by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
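As a point of reference for the synchronization cost mentioned above, the sketch below shows a pure-software test-and-set spinlock built on C11 atomics: on a many-core with no dedicated hardware support, every failed acquisition attempt adds traffic on the shared-memory system, which is the kind of overhead that motivates hybrid HW/SW synchronization mechanisms. It is a generic illustration, not the thesis's implementation, and it assumes C11 <threads.h> support on the host.

/* Pure-software synchronization: a test-and-set spinlock on C11 atomics.
 * Build with a C11 toolchain providing <threads.h> (e.g. -std=c11 -lpthread). */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;
static long counter = 0;

static void spin_lock(void)
{
    /* Each failed attempt is extra traffic on the shared-memory system. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;  /* busy-wait */
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static int worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; ++i) {
        spin_lock();
        counter++;            /* critical section protected by the lock */
        spin_unlock();
    }
    return 0;
}

int main(void)
{
    thrd_t t[4];
    for (int i = 0; i < 4; ++i) thrd_create(&t[i], worker, NULL);
    for (int i = 0; i < 4; ++i) thrd_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, 4 * 100000);
    return 0;
}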