945 results for Information-Analytical System
Abstract:
The particular characteristics and affordances of technologies play a significant role in human experience by defining the realm of possibilities available to individuals and societies. Some technological configurations, such as the Internet, facilitate peer-to-peer communication and participatory behaviors. Others, like television broadcasting, tend to encourage centralization of creative processes and unidirectional communication. In other instances still, the affordances of technologies can be further constrained by social practices. That is the case, for example, of radio, which, although technically allowing peer-to-peer communication, has effectively been converted into a broadcast medium through the regulation of the airwaves. How technologies acquire particular properties, meanings and uses, and who is involved in those decisions, are the broader questions explored here. Although a long line of thought maintains that technologies evolve according to the logic of scientific rationality, recent studies have demonstrated that technologies are, in fact, primarily shaped by social forces in specific historical contexts. In this view, adopted here, there is no one best way to design a technological artifact or system; the selection between alternative designs—which determine the affordances of each technology—is made by social actors according to their particular values, assumptions and goals. Thus, the arrangement of technical elements in any technological artifact is configured to conform to the views and interests of those involved in its development. Understanding how technologies assume particular shapes, who is involved in these decisions and how, in turn, they foster particular behaviors and modes of organization but not others, requires understanding the contexts in which they are developed. It is argued here that, throughout the last century, two distinct approaches to the development and dissemination of technologies have coexisted. Each of these models is based on a fundamentally different ethos; in each, technologies are developed through different processes and by different participants—and therefore tend to assume different shapes and offer different possibilities. In the first of these approaches, the dominant model in Western societies, technologies are typically developed by firms, manufactured in large factories, and subsequently disseminated to the rest of the population for consumption. In this centralized model, the role of users is limited to selecting from the alternatives presented by professional producers. Thus, according to this approach, the technologies that are now so deeply woven into human experience are primarily shaped by a relatively small number of producers. In recent years, however, three interconnected interest groups—the maker, hackerspace, and open source hardware communities—have increasingly challenged this dominant model by enacting an alternative approach in which technologies are both individually transformed and collectively shaped. Through an in-depth analysis of these phenomena, their practices and ethos, it is argued here that the distributed approach practiced by these communities offers a practical path towards a democratization of the technosphere by: 1) demystifying technologies, 2) providing the public with the tools and knowledge necessary to understand and shape technologies, and 3) encouraging citizen participation in the development of technologies.
Abstract:
In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are increasingly geolocated. The resulting large volumes of data form what is called "Big Data", and scientists still do not fully know how to deal with it. Different Data Mining tools are used to try to extract useful information from this Big Data. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, which means that they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we investigate whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different Data Mining methods to determine which one performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
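Memory Based Reasoning is, in essence, a nearest-neighbour technique: new cases are classified by comparing them with stored examples. The sketch below is a minimal, hypothetical illustration of that idea applied to photo-tag text, using scikit-learn's TF-IDF vectorizer and k-NN classifier; the tags, labels and class names are invented for illustration and are not the Panoramio data used in the study.

```python
# Hypothetical sketch: land-use classification from photo tags with a
# memory-based (k-nearest-neighbour) classifier. Tags and labels are
# illustrative, not the Panoramio data used in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

train_tags = [
    "beach sand sea sunset",
    "office tower downtown street",
    "wheat field tractor farm",
    "harbour cranes containers ship",
]
train_labels = ["recreation", "urban", "agriculture", "industrial"]

# TF-IDF bag-of-words representation plus k-NN: a new photo is labelled
# with the class of its most similar "remembered" training example
# (k would normally be tuned by cross-validation on a larger corpus).
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(train_tags, train_labels)

print(model.predict(["sea waves beach umbrella"]))  # expected: ['recreation']
```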
Abstract:
In the last few years, we have observed an exponential increase in information systems, and parking information is one more example of them. Obtaining reliable and updated information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied. San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration into a web application, where all kinds of users will be able to know the current parking status and also future status according to parking model predictions. The source of the data is ancillary in this work, but it still needs to be understood in order to understand the parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author explains the techniques best suited to this problem, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour, and with this knowledge it can predict future status values given a date. The data used comes from Smart Park Ontinyent and consists of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test was done with a boosting ensemble classifier, employed over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work provides error measurements that indicate how reliable the outcome predictions are. The second test used function fitting with the seasonal exponential smoothing TBATS model. Finally, as the last test, a model was tried that is a combination of the previous two, to see the result of this combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies in the predictions for the three models respectively. For a car park of 47 places, this means about a 10% average error in parking slot predictions; this result could be even better with a longer data record available. In order to make this kind of information visible and reachable by everyone with an Internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, were: distances from the user's location to the different car parks in the city; geocoding, the service for matching a literal description or an address to a concrete location; geolocation, the service for positioning the user; and a parking list panel, which is not a service or a function but simply a better visualization and handling of the information.
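As a rough illustration of the time-series side of such a model, the sketch below fits a seasonal exponential smoothing model to a synthetic hourly occupancy series and forecasts the next day's vacancies. Holt-Winters smoothing from statsmodels is used here as a simple stand-in for the TBATS model mentioned in the abstract, and the data are synthetic rather than the Smart Park Ontinyent records.

```python
# Minimal sketch: forecasting parking occupancy with seasonal exponential
# smoothing. Holt-Winters (statsmodels) is used as a simple stand-in for
# the TBATS model mentioned above; the occupancy series is synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

hours = pd.date_range("2015-01-01", periods=24 * 28, freq="60min")
hour_of_day = hours.hour.to_numpy()

# Synthetic occupancy out of 47 places: a daily cycle plus noise.
occupied = np.clip(25 + 15 * np.sin(2 * np.pi * hour_of_day / 24)
                   + np.random.normal(0, 2, len(hours)), 0, 47)
series = pd.Series(occupied, index=hours)

# Additive seasonality with a 24-hour period captures the daily pattern.
model = ExponentialSmoothing(series, trend=None, seasonal="add",
                             seasonal_periods=24).fit()

forecast = model.forecast(24)   # next day's hourly occupancy
vacancies = 47 - forecast       # predicted free places
print(vacancies.round(1).head())
```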
Abstract:
ABSTRACT - The health system is constantly subject to pressures, the most relevant being the pressure to increase quality and the need to contain costs. Adverse Events (AEs) occurring in hospitals are a serious quality problem in health care delivery, with clinical, social, economic and reputational consequences that affect patients, professionals, organizations and the health system itself. The costs associated with the occurrence of AEs in hospitals significantly increase hospital costs, representing about one in every seven dollars spent on patient care. Only in the last decade have studies emerged with the main objective of assessing this impact in the hospital setting, and there is still considerable uncertainty about the variables and methods to use. The main objective of this project work was to identify and characterize the different methodologies used to assess the economic costs, namely the direct costs, related to the occurrence of adverse events in hospitals. Given the difficulties mentioned, the methodology adopted was a narrative literature review, complemented by a nominal group technique. The results obtained were as follows: i) the methodology used in most studies to determine the frequency, nature and consequences of AEs occurring in hospitals relies on observational, analytical designs based on retrospective cohort studies, using the criteria defined by the Harvard Medical Practice Study; ii) most of the studies assess the direct costs of AEs in hospitals; iii) there is a wide diversity of methods for determining the costs associated with AEs, with most studies estimating this value by counting the number of additional days of hospitalization resulting from the AE, valued at average costs; iv) the expert group proposed the use of patient-level costing systems as the methodology for determining the cost associated with each AE; v) the development of an IT platform is proposed that allows the information available in the patient's electronic clinical record to be cross-referenced with an automatic AE identification system, yet to be developed, and with patient-level costing systems, in order to value the costs per patient and per type of AE. The assessment of the direct costs associated with the occurrence of AEs in the hospital setting, given its economic and social impact on patients and organizations, will certainly be one of the areas of future study and research, with a view to improving the efficiency of the health system and the quality and safety of the care provided to patients.
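As a hypothetical illustration of the costing approach most of the reviewed studies adopt (additional days of stay attributable to the adverse event, valued at an average daily cost), the sketch below aggregates incremental costs per adverse event type; all figures are invented for illustration.

```python
# Hypothetical sketch of the costing approach most studies use: extra
# hospital days attributable to each adverse event (AE), valued at an
# average daily cost. All figures are invented for illustration.
AVERAGE_DAILY_COST = 600.0  # currency units per inpatient day (illustrative)

adverse_events = [
    {"type": "medication error",   "extra_days": 3},
    {"type": "surgical infection", "extra_days": 10},
    {"type": "pressure ulcer",     "extra_days": 5},
]

costs_by_type = {}
for ae in adverse_events:
    cost = ae["extra_days"] * AVERAGE_DAILY_COST   # incremental cost of this AE
    costs_by_type[ae["type"]] = costs_by_type.get(ae["type"], 0.0) + cost

print(costs_by_type)
print("total incremental cost:", sum(costs_by_type.values()))
```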
Abstract:
This dissertation aims to guarantee the integration of a mobile autonomous robot, equipped with many sensors, into a multi-agent, distributed and georeferenced surveillance system. The integration of a mobile autonomous robot into this system leads to new features that clients of the surveillance system may use. These features may be of two types: using the robot as an agent that acts in the environment, or using the robot as a mobile set of sensors. As an agent in the system, the robot can move to certain locations when alerts are received, in order to acknowledge the underlying events or take action to assist in resolving them. As a sensor platform in the system, it is possible to access information read from the robot's sensors and to obtain measurements complementary to the ones taken by other sensors in the multi-agent system. To integrate this mobile robot in an effective way, it is necessary to extend the current multi-agent system architecture to make the connection between the two systems and to integrate the functionalities provided by the robot into the multi-agent system.
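A hypothetical sketch of how the robot's two roles could be exposed to the multi-agent system is given below; class and method names are invented for illustration and do not correspond to the actual system's interfaces.

```python
# Hypothetical sketch of the two roles the robot plays in the multi-agent
# surveillance system: acting on alerts and serving as a mobile sensor set.
# Class and method names are illustrative, not the system's actual API.
from dataclasses import dataclass

@dataclass
class Alert:
    event_id: str
    latitude: float
    longitude: float

class RobotAgent:
    def __init__(self, robot_driver):
        self.robot = robot_driver            # low-level robot interface

    # Role 1: agent acting in the environment.
    def handle_alert(self, alert: Alert) -> None:
        """Navigate to the georeferenced alert location to acknowledge
        or help resolve the underlying event."""
        self.robot.goto(alert.latitude, alert.longitude)

    # Role 2: mobile sensor platform.
    def read_sensors(self) -> dict:
        """Expose the robot's on-board measurements so other agents can
        fuse them with readings from fixed sensors."""
        return {
            "position": self.robot.position(),
            "camera_frame": self.robot.camera.capture(),
            "temperature": self.robot.temperature(),
        }
```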
Abstract:
Nowadays, authentication studies for paintings require a multidisciplinary approach, based on the contribution of visual feature analysis but also on the characterization of materials and techniques. Moreover, it is important that the assessment of the authorship of a painting is supported by technical studies of a selected number of original artworks that cover the entire career of an artist. This dissertation is concerned with the work of the modernist painter Amadeo de Souza-Cardoso. It is divided into three parts. In the first part, we propose a tool based on image processing that combines information obtained by brushstroke and materials analysis. The resulting tool provides qualitative and quantitative evaluation of the authorship of the paintings; the quantitative element is particularly relevant, as it could be crucial in solving authorship controversies, such as judicial disputes. The brushstroke analysis was performed by combining two algorithms for feature detection, namely the Gabor filter and the Scale Invariant Feature Transform. Thanks to this combination (and to the use of the Bag-of-Features model), the proposed method shows an accuracy higher than 90% in distinguishing between images of Amadeo's paintings and images of artworks by other contemporary artists. For the molecular analysis, we implemented a semi-automatic system that uses hyperspectral imaging and elemental analysis. The system provides as output an image that depicts the mapping of the pigments present, together with the areas made using materials not coherent with Amadeo's palette, if any. This visual output is a simple and effective way of assessing the results of the system. The proposed tool, based on the combination of brushstroke and molecular information, was tested on twelve paintings, obtaining promising results. The second part of the thesis presents a systematic study of four selected paintings made by Amadeo in 1917. Although untitled, three of these paintings are commonly known as BRUT, Entrada and Coty; they are considered his most successful and genuine works. The materials and techniques of these artworks have never been studied before. The paintings were studied with a multi-analytical approach using micro-Energy Dispersive X-ray Fluorescence spectroscopy, micro-Infrared and Raman Spectroscopy, micro-Spectrofluorimetry and Scanning Electron Microscopy. The characterization of the materials and techniques Amadeo used in his last paintings, as well as the investigation of some of the conservation problems that affect these paintings, is essential to enrich the knowledge of this artist. Moreover, the study of the materials in the four paintings reveals commonalities between the paintings BRUT and Entrada. This observation is also supported by the analysis of the elements present in a photograph of a collage (conserved at the Art Library of the Calouste Gulbenkian Foundation), the only remaining evidence of a supposed maquette of these paintings. The final part of the thesis describes the application of the image processing tools developed in the first part to a set of case studies; this experience demonstrates the potential of the tool to support painting analysis and authentication studies. The brushstroke analysis was used as an additional analysis in the evaluation process of four paintings attributed to Amadeo, and the system based on hyperspectral analysis was applied to the painting dated 1917. The case studies therefore serve as a bridge between the first two parts of the dissertation.
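A condensed, hypothetical sketch of a bag-of-features brushstroke classifier of the kind described above is shown below, assuming OpenCV for SIFT descriptors and scikit-learn for the visual vocabulary and classifier; the file names, labels and vocabulary size are illustrative, and the Gabor-filter branch of the actual tool is omitted.

```python
# Condensed sketch of a Bag-of-Features brushstroke classifier (SIFT
# descriptors, k-means visual vocabulary, SVM). Paths, labels and the
# vocabulary size are illustrative; the thesis also uses Gabor features.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def sift_descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(img, None)
    return desc

def bof_histogram(desc, vocab):
    words = vocab.predict(desc)                      # assign visual words
    return np.bincount(words, minlength=vocab.n_clusters).astype(float)

train_paths = ["amadeo_01.jpg", "amadeo_02.jpg", "other_01.jpg", "other_02.jpg"]
labels = [1, 1, 0, 0]                                # 1 = Amadeo, 0 = other artist

all_desc = [sift_descriptors(p) for p in train_paths]
vocab = KMeans(n_clusters=100).fit(np.vstack(all_desc))   # visual vocabulary

X = np.array([bof_histogram(d, vocab) for d in all_desc])
clf = SVC(kernel="rbf").fit(X, labels)

# A new image is classified by its histogram of visual words.
print(clf.predict([bof_histogram(sift_descriptors("query.jpg"), vocab)]))
```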
Abstract:
In any business, it is very important to measure performance, as it helps to select key information for making better decisions on time. This research focuses on the design and implementation of a performance measurement system in a Portuguese medium-sized firm operating in the specialized health-care vehicle conversion industry. Based on the evidence that the outputs from Auto Ribeiro's current information system are misaligned with the company's objectives and strategy, this research tries to solve this business problem through the development of a Balanced Scorecard analysis, although some issues remain that deserve further development.
Abstract:
Introduction: The early diagnosis of mycobacterial infections is a critical step for initiating treatment and curing the patient. Molecular analytical methods have led to considerable improvements in the speed and accuracy of mycobacteria detection. Methods: The purpose of this study was to evaluate a multiplex polymerase chain reaction (PCR) system using mycobacterial strains as an auxiliary tool in the differential diagnosis of tuberculosis and diseases caused by nontuberculous mycobacteria (NTM). Results: Forty mycobacterial strains isolated from specimens of pulmonary and extrapulmonary origin from 37 patients diagnosed with tuberculosis were processed. Using the phenotypic and biochemical characteristics of the 40 mycobacteria isolated in LJ medium, 57.5% (n=23) were characterized as the Mycobacterium tuberculosis complex (MTBC) and 20% (n=8) as nontuberculous mycobacteria (NTM), with 22.5% (n=9) of the results being inconclusive. When the results of the phenotypic and biochemical tests in 30 strains of mycobacteria were compared with the results of the multiplex PCR, there was 100% concordance in the identification of the MTBC and NTM species, respectively. A total of 32.5% (n=13) of the samples in multiplex PCR exhibited a molecular pattern consistent with NTM, thus disagreeing with the final diagnosis from the attending physician. Conclusions: Multiplex PCR can be used as a differential method for distinguishing TB from infections caused by NTM, making it a valuable tool in reducing the time necessary to make clinical diagnoses and begin treatment. It is also useful for identifying species that were previously not identifiable using conventional biochemical and phenotypic techniques.
Abstract:
The need for more efficient illumination systems has led to the proliferation of Solid-State Lighting (SSL) systems, which offer optimized power consumption. SSL systems are composed of LED devices, which are intrinsically fast devices and permit very fast light modulation. This, along with the congestion of the radio frequency spectrum, has paved the way for the emergence of Visible Light Communication (VLC) systems. VLC uses free space to convey information by means of light modulation. Notwithstanding, as VLC systems proliferate and cost competitiveness ensues, there are two important aspects to be considered. State-of-the-art VLC implementations use power-demanding PAs, and thus it is important to investigate whether regular, existing Switched-Mode Power Supply (SMPS) circuits can be adapted for VLC use. A 28 W buck regulator was implemented using an off-the-shelf LED driver integrated circuit, using both series and parallel dimming techniques. Results show that optical clock frequencies up to 500 kHz are achievable without any major modification besides adequate component sizing. The use of an LED as a sensor was investigated, in a short-range, low-data-rate perspective. Results show successful communication in an LED-to-LED configuration, with enhanced range when using LED strings as sensors. Besides, LEDs present spectrally selective sensitivity, which makes them good contenders for a multi-colour LED-to-LED system, such as in the use of RGB displays and lamps. Ultimately, the present work shows evidence that LEDs can be used as dual-purpose devices, enabling not only illumination, but also bi-directional data communication.
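For context, the sketch below simulates the basic on-off keying light modulation that such an LED driver performs, with a noisy channel and threshold detection at the receiver; the parameters are illustrative and the code is not part of the hardware implementation described above.

```python
# Minimal sketch of on-off keying (OOK) light modulation, the kind of fast
# LED switching a VLC driver performs. Parameters are illustrative; the
# work above reports optical clock frequencies up to 500 kHz.
import numpy as np

bit_rate = 500_000                        # bits per second (500 kHz clock)
samples_per_bit = 10
bits = np.random.randint(0, 2, 64)        # payload to transmit

# Transmitter: each bit maps to a high/low LED intensity interval.
tx = np.repeat(bits, samples_per_bit).astype(float)

# Channel: add a little receiver noise.
rx = tx + np.random.normal(0, 0.1, tx.size)

# Receiver: average each bit interval and threshold at the midpoint.
decoded = (rx.reshape(-1, samples_per_bit).mean(axis=1) > 0.5).astype(int)
print("bit errors:", int(np.sum(decoded != bits)))
```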
Abstract:
Currently the world is swiftly adapting to visual communication. Online services like YouTube and Vine show that video is no longer the domain of broadcast television alone. Video is used for different purposes such as entertainment, information, education and communication. The rapid growth of today's video archives, with sparsely available editorial data, creates a big retrieval problem. Humans see a video as a complex interplay of cognitive concepts, so there is a need to build a bridge between numeric values and semantic concepts; this connection will facilitate video retrieval by humans. The critical aspect of this bridge is video annotation. The process can be done manually or automatically. Manual annotation is very tedious, subjective and expensive; therefore automatic annotation is being actively studied. In this thesis we focus on automatic annotation of multimedia content, namely the use of analysis techniques for information retrieval that allow metadata to be extracted automatically from video in a videomail system, including the identification of text, people, actions, spaces and objects, including animals and plants. It will thus be possible to align the multimedia content with the text presented in the email message and to create applications for semantic indexing and retrieval of video databases.
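As a hypothetical illustration of one early step in such a pipeline, the sketch below detects shot boundaries in a video by comparing colour histograms of consecutive frames with OpenCV; the detected shots could then be passed to concept classifiers (text, people, objects) for tagging. The file name and threshold are illustrative.

```python
# Hypothetical sketch of a first step in automatic video annotation:
# detecting shot boundaries by comparing colour histograms of consecutive
# frames (OpenCV). Detected shots can then be sent to concept classifiers
# for tagging. The file name and threshold are illustrative.
import cv2

def shot_boundaries(video_path, threshold=0.5):
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, frame_idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Low correlation between consecutive histograms -> new shot.
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(frame_idx)
        prev_hist, frame_idx = hist, frame_idx + 1
    cap.release()
    return boundaries

print(shot_boundaries("videomail_message.mp4"))
```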
Abstract:
Fado was listed as UNESCO Intangible Cultural Heritage in 2011. This dissertation describes a theoretical model, as well as an automatic system, able to generate instrumental music based on the musics and vocal sounds typically associated with the practice of fado. A description of the phenomenon of fado, its musics and vocal sounds, based on ethnographic and historical sources and on empirical data, is presented. The data include the creation of a digital corpus of musical transcriptions identified as fado, and its statistical analysis via music information retrieval techniques. The second part consists of the formulation of a theory and the coding of a symbolic model, as a proof of concept, for the automatic generation of instrumental music based on the music in the corpus.
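A hypothetical sketch of the corpus-statistics side of this work is given below, using music21 to load symbolic transcriptions and extract simple descriptors (estimated key and pitch-class histogram) of the kind used in music information retrieval; the file names are illustrative and not the dissertation's actual corpus.

```python
# Hypothetical sketch of simple corpus statistics over symbolic fado
# transcriptions with music21: estimated key and pitch-class histogram.
# File names are illustrative, not the dissertation's actual corpus.
from collections import Counter
from music21 import converter

corpus_files = ["fado_menor.xml", "fado_corrido.xml", "fado_mouraria.xml"]

for path in corpus_files:
    score = converter.parse(path)
    key = score.analyze("key")                     # e.g. "a minor"
    pitch_classes = Counter(n.pitch.pitchClass
                            for n in score.flatten().notes if n.isNote)
    print(path, key, pitch_classes.most_common(3))
```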
Abstract:
Geochemical and geochronological analyses of samples of surficial Acre Basin sediments and fossils indicate that an extensive fluvial-lacustrine system occupying this region desiccated slowly during the last glacial cycle (LGC). This research documents direct evidence for aridity in western Amazonia during the LGC and is important in establishing boundary conditions for LGC climate models as well as in correlating marine and continental (LGC) climate conditions.
Abstract:
Urban mobility is one of the main challenges facing urban areas, due to the growing population and to traffic congestion, resulting in environmental pressures. The pathway to sustainable urban mobility involves strengthening intermodal mobility. The integrated use of different transport modes is becoming more and more important, and intermodality has been mentioned as a way for public transport to compete with private cars. The aim of the current dissertation is to define a set of strategies to improve urban mobility in Lisbon and, by consequence, reduce the environmental impacts of transport. To that end, several intermodal practices across Europe were analysed and the transport systems of Brussels and Lisbon were studied and compared, giving special attention to intermodal systems. In the case study, data were gathered from both cities in the field, by using and observing the different transport modes, and two surveys of the cities' users were carried out. As the study concludes, Brussels and Lisbon present significant differences. In Brussels the measures to promote intermodality are evident, while in Lisbon a lot still needs to be done. The study also made clear the need for improvements in Lisbon's public transport towards a more intermodal passenger transport system, through the integration of different transport modes and better information and ticketing systems. Some of the points requiring development are: interchanges' waiting areas; integration of the bicycle in public transport; information about connections with other transport modes; and real-time information to passengers pre-trip and on-trip, especially in buses and trams. After identifying the best practices in Brussels and the weaknesses in Lisbon, the possibility of applying some of the Brussels practices to Lisbon was evaluated. Brussels proved to be a good example of intermodality, and for that reason some of the recommendations to improve intermodal mobility in Lisbon can follow the practices in place in Brussels.
Abstract:
The objective of this paper is to propose a simplified analytical approach to predict the flexural behavior of simply supported reinforced-concrete (RC) beams flexurally strengthened with prestressed carbon fiber reinforced polymer (CFRP) reinforcements using either externally bonded reinforcing (EBR) or near surface mounted (NSM) techniques. This design methodology also considers the ultimate flexural capacity of NSM CFRP strengthened beams when concrete cover delamination is the governing failure mode. A moment–curvature (M–χ) relationship formed by three linear branches corresponding to the precracking, postcracking, and postyielding stages is established by considering the four critical M–χ points that characterize the flexural behavior of CFRP strengthened beams. Two additional M–χ points, namely, concrete decompression and steel decompression, are also defined to assess the initial effects of the prestress force applied by the FRP reinforcement. The mid-span deflection of the beams is predicted based on the curvature approach, assuming a linear curvature variation between the critical points along the beam length. The good predictive performance of the analytical model is appraised by simulating the force–deflection response registered in experimental programs composed of RC beams strengthened with prestressed NSM CFRP reinforcements.
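A minimal numerical sketch of the curvature approach described above is shown below: assuming a linear curvature variation between the critical sections of a simply supported beam, the mid-span deflection is obtained by integrating the product of the curvature diagram and the bending moment due to a unit load at mid-span (virtual work). All values are illustrative.

```python
# Minimal sketch of the curvature approach for mid-span deflection of a
# simply supported beam: curvature chi(x) varies linearly between critical
# sections, and the deflection is the virtual-work integral of
# m_bar(x) * chi(x), where m_bar is the moment due to a unit load at
# mid-span. All values are illustrative.
import numpy as np

span = 4.0                                 # span (m)
x = np.linspace(0.0, span, 401)

# Piecewise-linear curvature diagram (1/m) interpolated between critical
# sections, e.g. supports and the cracked mid-span region.
x_crit = [0.0, 1.0, 2.0, 3.0, span]
chi_crit = [0.0, 4e-3, 8e-3, 4e-3, 0.0]
chi = np.interp(x, x_crit, chi_crit)

# Bending moment due to a unit load at mid-span (triangular diagram).
m_bar = np.where(x <= span / 2, x / 2, (span - x) / 2)

# Trapezoidal integration of m_bar * chi along the span.
integrand = m_bar * chi
delta_mid = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
print(f"mid-span deflection = {delta_mid * 1000:.1f} mm")
```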
Abstract:
Hybrid Composite Plate (HCP) is a reliable, recently proposed retrofitting solution for concrete structures, composed of a strain hardening cementitious composite (SHCC) plate reinforced with Carbon Fibre Reinforced Polymer (CFRP). This system benefits from the synergetic advantages of these two composites, namely the high ductility of SHCC and the high tensile strength of CFRPs. In the material-structural arrangement of HCP, the ultra-ductile SHCC plate acts as a suitable medium for stress transfer between the CFRP laminates (bonded into pre-sawn grooves executed on the SHCC plate) and the concrete substrate, by means of a connection system made with chemical anchors, adhesive, or a combination thereof. In comparison with traditional applications of FRP systems, HCP is a retrofitting solution that (i) is less susceptible to the detrimental effect that a lack of strength and soundness of the concrete cover has on the strengthening effectiveness; (ii) assures higher durability for the strengthened elements and higher protection of the FRP component against high temperatures and vandalism; and (iii) delays, or even prevents, detachment of the concrete substrate. This paper describes the experimental program carried out and presents and discusses the relevant results obtained in the assessment of the performance of HCP-strengthened reinforced concrete (RC) beams subjected to flexural loading. Moreover, an analytical approach to estimate the ultimate flexural capacity of these beams is presented, complemented with a numerical strategy for predicting their load-deflection behaviour. By attaching the HCP to the beams' soffit, a significant increase in the flexural capacity at service, at yield initiation of the tension steel bars and at failure of the beams can be achieved, while satisfactory deflection ductility is assured and a high tensile capacity of the CFRP laminates is mobilized. Both the analytical and the numerical approaches predicted, with satisfactory agreement, the load-deflection response of the reference beam and of the strengthened beams tested experimentally.