852 results for information networks
Abstract:
Pectus excavatum is the most common deformity of the thorax. Pre-operative diagnosis usually includes Computed Tomography (CT) so that a thoracic prosthesis for anterior chest wall remodeling can be employed successfully. Aiming at the elimination of radiation exposure, this paper presents a novel methodology for replacing CT with a 3D laser scanner (radiation-free) for prosthesis modeling. The complete elimination of CT is based on an accurate determination of the rib positions and of the prosthesis placement region from skin surface points. The developed solution resorts to the normalized and combined outcome of a set of artificial neural networks (ANNs). Each ANN model was trained with data vectors from 165 male patients, using soft tissue thicknesses (STT) that combine information from the skin and rib cage (automatically determined by image processing algorithms). Tests revealed that the rib positions for prosthesis placement and modeling can be estimated with an average error of 5.0 ± 3.6 mm. It was also shown that the ANN performance can be improved by introducing a manually determined initial STT value into the ANN normalization procedure (average error of 2.82 ± 0.76 mm). This error range is well below that of current manual prosthesis modeling (approximately 11 mm), so the method can provide a valuable, radiation-free procedure for prosthesis personalization.
Abstract:
Pectus excavatum is the most common deformity of the thorax, and pre-operative diagnosis usually involves a Computed Tomography (CT) examination. Aiming at eliminating the high radiation exposure of CT, this work presents a new methodology for replacing CT with a laser scanner (radiation-free) in the treatment of pectus excavatum using personally modeled prostheses. The complete elimination of CT requires determining the external outline of the ribs, at the point of maximum sternum depression where the prosthesis is placed, from chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant learning algorithms were used. The training procedure used soft tissue thicknesses determined by image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the rib outlines in scanner data from 20 patients. Tests revealed that rib positions can be estimated with an average error of about 6.82 ± 5.7 mm for the left and right sides of the patient. This error range is well below that of current manual prosthesis modeling (11.7 ± 4.01 mm), even without CT imaging, indicating a considerable step towards the replacement of CT by a 3D scanner for prosthesis personalization.
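As an illustration of the kind of model described in the two abstracts above, the following is a minimal sketch in Python of training a small feed-forward network to regress soft tissue thickness (STT) from skin surface coordinates. All names, the synthetic data and the scikit-learn `MLPRegressor` setup are assumptions made for illustration; the original work used dedicated training algorithms such as Levenberg-Marquardt and its own normalization combining skin and rib cage data.

```python
# Minimal sketch: regress soft tissue thickness (STT) from skin surface
# points with a small feed-forward network. Synthetic data stands in for
# the laser-scanner measurements used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training set: (x, y, z) skin surface points -> STT in mm.
X_skin = rng.uniform(-100, 100, size=(165, 3))        # one vector per patient
stt_mm = 10 + 0.05 * X_skin[:, 2] + rng.normal(0, 1, 165)

# Normalization step (the paper folds a manually measured initial STT
# value into its normalization; plain standard scaling is used here).
scaler = StandardScaler().fit(X_skin)

# Small multilayer perceptron; scikit-learn does not offer
# Levenberg-Marquardt, so L-BFGS is used instead.
ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=2000, random_state=0)
ann.fit(scaler.transform(X_skin), stt_mm)

# Predicted STT for new skin points; adding it to the skin surface depth
# yields an estimate of the rib position underneath.
new_points = rng.uniform(-100, 100, size=(5, 3))
print(ann.predict(scaler.transform(new_points)))
```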
Abstract:
Recently, several distributed video coding (DVC) solutions based on the distributed source coding (DSC) paradigm have appeared in the literature. Wyner-Ziv (WZ) video coding, a particular case of DVC where side information is made available at the decoder, enables a flexible distribution of the computational complexity between the encoder and decoder, promising to fulfill novel requirements from applications such as video surveillance, sensor networks and mobile camera phones. The quality of the side information at the decoder plays a critical role in determining the WZ video coding rate-distortion (RD) performance, notably in raising it to a level as close as possible to the RD performance of standard predictive video coding schemes. Towards this target, efficient motion search algorithms for powerful frame interpolation are much needed at the decoder. In this paper, the RD performance of a Wyner-Ziv video codec is improved by using novel, advanced motion-compensated frame interpolation techniques to generate the side information. The development of this type of side information estimator is a difficult problem in WZ video coding, especially because the decoder only has some reference decoded frames available. Based on the regularization of the motion field, novel side information creation techniques are proposed in this paper, along with a new frame interpolation framework able to generate higher quality side information at the decoder. To illustrate the RD performance improvements, this novel side information creation framework has been integrated into a transform-domain turbo-coding-based Wyner-Ziv video codec. Experimental results show that the novel side information creation solution leads to better RD performance than available state-of-the-art side information estimators, with improvements of up to 2 dB; moreover, it allows outperforming H.264/AVC Intra by up to 3 dB with a lower encoding complexity.
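To make the role of decoder-side frame interpolation concrete, below is a minimal sketch, in Python with NumPy, of block-matching motion-compensated interpolation between two key frames. It illustrates only the generic technique; the motion-field regularization and the full framework proposed in the paper are not reproduced, and all names and parameter values (block size, search range) are assumptions.

```python
# Minimal sketch of motion-compensated frame interpolation: estimate one
# motion vector per block between two decoded key frames and average the
# two motion-compensated predictions to build the in-between frame.
import numpy as np

def interpolate_midframe(prev, nxt, block=8, search=4):
    h, w = prev.shape
    mid = np.zeros((h, w), dtype=np.float64)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = nxt[by:by+block, bx:bx+block].astype(np.float64)
            best_dy, best_dx, best_sad = 0, 0, np.inf
            # Full search for the best-matching block in the previous frame.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y+block, x:x+block].astype(np.float64)
                        sad = np.abs(cand - ref).sum()
                        if sad < best_sad:
                            best_dy, best_dx, best_sad = dy, dx, sad
            # Side information block: average of the two predictions
            # along the estimated motion trajectory.
            y, x = by + best_dy, bx + best_dx
            mid[by:by+block, bx:bx+block] = 0.5 * (
                prev[y:y+block, x:x+block].astype(np.float64) + ref)
    return mid.astype(prev.dtype)

# Hypothetical 32x32 key frames; a real decoder would use decoded frames.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, (32, 32)).astype(np.uint8)
f2 = np.roll(f0, 2, axis=1)          # simple horizontal motion
side_info = interpolate_midframe(f0, f2)
```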
Abstract:
Integrated manufacturing constitutes a complex system made of heterogeneous information and control subsystems. Those subsystems are not designed for cooperation: typically, each subsystem automates specific processes and establishes a closed application domain, so it is very difficult to integrate it with other subsystems in order to respond to the required process dynamics. Furthermore, to cope with ever-growing market competition and demands, manufacturing/enterprise systems must increase their responsiveness based on up-to-date knowledge and in-time data gathered from the diverse information and control systems. This has created new challenges for the manufacturing sector, and even bigger challenges for collaborative manufacturing. The growing complexity of information and communication technologies, when coping with innovative business services based on collaborative contributions from multiple stakeholders, requires novel and multidisciplinary approaches. Service orientation is a strategic approach to deal with such complexity and with the diversity of the stakeholders' information systems. Services, or more precisely the autonomous computational agents implementing them, provide an architectural pattern able to cope with the needs of integrated and distributed collaborative solutions. This paper proposes a service-oriented framework to support a virtual organizations breeding environment, which is the basis for establishing short- or long-term goal-oriented virtual organizations. The notion of integrated business services, in which customers receive value developed through the contributions of a network of companies, is a key element.
Abstract:
Many of the most common human reasoning capabilities, such as temporal and non-monotonic reasoning, have not yet been fully mapped into deployed systems, even though some theoretical breakthroughs have already been accomplished. This is mainly due to the inherent computational complexity of the theoretical approaches. In the particular area of fault diagnosis in power systems, however, some systems that tried to solve the problem have been deployed, using methodologies such as production-rule-based expert systems, neural networks, recognition of chronicles and fuzzy expert systems. SPARSE (from the Portuguese acronym for expert system for incident analysis and restoration support) was one of the systems developed and, in the course of its development, came the need to cope with incomplete and/or incorrect information, as well as with the traditional problems of power system fault diagnosis based on SCADA (supervisory control and data acquisition) information retrieval, namely real-time operation, huge amounts of information, etc. This paper presents an architecture for a decision support system that solves these problems using a symbiosis of the event calculus and default-reasoning rule-based system paradigms, ensuring soft real-time operation and the ability to handle incomplete, incorrect or domain-incoherent information. A prototype implementation of this system is already at work in the control centre of the Portuguese Transmission Network.
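For readers unfamiliar with the event calculus mentioned above, here is a minimal sketch in Python of its core idea: a fluent holds at time t if some earlier event initiated it and no intervening event has terminated it since. The SCADA event names and the initiates/terminates tables are hypothetical; the paper's actual system combines this machinery with default reasoning rules.

```python
# Minimal event-calculus sketch: a fluent holds at time t if an event
# initiated it at some time <= t and no later event (still <= t)
# terminated it.
INITIATES = {"breaker_open": "line_disconnected"}
TERMINATES = {"breaker_close": "line_disconnected"}

def holds_at(fluent, t, events):
    """events: list of (time, event_name) pairs, e.g. from a SCADA log."""
    state = False
    for time, name in sorted(events):
        if time > t:
            break
        if INITIATES.get(name) == fluent:
            state = True
        elif TERMINATES.get(name) == fluent:
            state = False
    return state

# Hypothetical SCADA log: a breaker opens at t=3 and recloses at t=7.
log = [(3, "breaker_open"), (7, "breaker_close")]
print(holds_at("line_disconnected", 5, log))   # True
print(holds_at("line_disconnected", 8, log))   # False
```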
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are codified using formulas in a Łukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network having, as activation function, the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, where in each learning iteration we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to the descriptive power of the Łukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated networks, the method simplifies them in a pruning phase using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used in the generation of a given truth table. For real-data tests, we selected the Mushroom data set, available in the UCI Machine Learning Repository.
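The neuron encoding of Łukasiewicz connectives mentioned above can be demonstrated in a few lines. In this minimal Python sketch, each connective is a single neuron whose activation is the identity truncated to zero and one: strong conjunction max(0, x + y - 1), implication min(1, 1 - x + y) and negation 1 - x. The hand-picked weights follow the standard encoding of these connectives and are an illustration, not code from the paper.

```python
# Each Lukasiewicz connective is one neuron whose activation is the
# identity truncated to [0, 1]: f(s) = min(1, max(0, s)).
def truncated_identity(s):
    return min(1.0, max(0.0, s))

def neuron(weights, bias, inputs):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return truncated_identity(s)

# Strong conjunction: max(0, x + y - 1)  -> weights (1, 1), bias -1
def luk_and(x, y):
    return neuron((1, 1), -1, (x, y))

# Implication: min(1, 1 - x + y)         -> weights (-1, 1), bias 1
def luk_implies(x, y):
    return neuron((-1, 1), 1, (x, y))

# Negation: 1 - x                        -> weight -1, bias 1
def luk_not(x):
    return neuron((-1,), 1, (x,))

assert luk_and(0.75, 0.5) == 0.25
assert luk_implies(0.75, 0.5) == 0.75
assert luk_not(0.25) == 0.75
```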
Abstract:
Collaborative work plays an important role in today's organizations, especially in areas where decisions must be made. Any decision that involves a collective or group of decision makers is, by itself, complex, and such decisions have become recurrent in recent years. In this work we present the VirtualECare project, an intelligent multi-agent system able to monitor, interact with and serve its customers, who are normally in need of care services. In recent years there has been a substantial increase in the number of people in need of intensive care, especially among the elderly, a phenomenon related to population ageing. However, this is no longer exclusive to the elderly, as diseases like obesity, diabetes and high blood pressure have been increasing among young adults. This is a new reality that needs to be dealt with by the health sector, particularly the public one. Given this scenario, finding new and cost-effective ways of delivering health care is of particular importance, especially when we believe that patients should not be removed from their natural "habitat". Following this line of thinking, the VirtualECare project is presented, along with similar projects that preceded it. Recently we have also witnessed a growing interest in combining the advances in the information society - computing, telecommunications and presentation - in order to create Group Decision Support Systems (GDSS). Indeed, the new economy, along with increased competition in today's complex business environments, leads companies to seek complementarities in order to increase competitiveness and reduce risks. Under these scenarios, planning takes a major role in a company's life. However, effective planning depends on the generation and analysis of ideas (innovative or not) and, as a result, the idea generation and management processes are crucial. Our objective is to apply the GDSS presented above to a new area. We believe that the use of GDSS in the healthcare arena will allow professionals to achieve better results in the analysis of a patient's Electronic Clinical Profile (ECP). This achievement is vital given the explosion of knowledge and skills, together with the need to use limited resources and obtain better results.
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit the data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits well applications and services such as digital television and video storage, where the decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where the encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer of the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty with regard to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, which is a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested in the past decade to develop increasingly more efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide this review, allowing more solid conclusions to be reached and the next relevant research challenges to be better identified. After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class, and the evaluation of some of them, leads to the important conclusion that the relative RD performance of the side information creation methods depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solution for specific types of content.
Abstract:
Advances in networking and information technologies are transforming factory-floor communication systems into a mainstream activity within industrial automation. It is now recognized that future industrial computer systems will be intimately tied to real-time computing and to communication technologies. For this vision to succeed, complex heterogeneous factory-floor communication networks (including mobile/wireless components) need to function in a predictable, flawless, efficient and interoperable way. In this paper we revisit the issue of supporting real-time communications in hybrid wired/wireless fieldbus-based networks, bringing into the discussion some experimental results obtained in the framework of the RFieldbus ISEP pilot.
Abstract:
The marriage of emerging information technologies with control technologies is a major driving force that, in the context of the factory floor, is creating an enormous eagerness to extend the capabilities of currently available fieldbus networks to cover functionalities not considered until recently. Providing wireless capabilities to this type of communication network is a big share of that effort. The RFieldbus European project is just one example, in which PROFIBUS was provided with suitable extensions for implementing hybrid wired/wireless communication systems. In RFieldbus, interoperability between wired and wireless components is achieved by the use of specific intermediate networking systems operating as repeaters, thus creating a single logical ring (SLR) network. The main advantage of the SLR approach is that the effort for protocol extensions is not significant. However, a multiple logical ring (MLR) approach provides traffic and error isolation between different network segments. This concept was introduced in previous work, where an approach for a bridge-based architecture was briefly outlined. This paper focuses on the details of the Inter-Domain Protocol (IDP), which is responsible for handling transactions between different network domains (wired or wireless) running the PROFIBUS protocol.
Abstract:
Reliability of communications is key to expanding the application domains of sensor networks. Since Wireless Sensor Networks (WSNs) operate in the license-free Industrial, Scientific and Medical (ISM) bands, and hence share the spectrum with other wireless technologies, addressing interference is an important challenge. In order to minimize its effect, nodes can dynamically adapt radio resources, provided information about current spectrum usage is available. We present a new channel quality metric, based on the availability of the channel over time, which meaningfully quantifies spectrum usage. We discuss the optimum scanning time for capturing the channel condition while maintaining energy efficiency. Using data collected from a number of Wi-Fi networks operating in a library building, we show that our metric has a strong correlation with the Packet Reception Rate (PRR). This suggests that quantifying interference in the channel can help in adapting resources for better reliability. We also discuss the use of our metric in various resource allocation and adaptation strategies.
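As a rough illustration of an availability-over-time channel metric of this kind, the sketch below (Python; the threshold, window length and data are assumptions, not the paper's exact definition) classifies periodic RSSI samples as busy or idle and reports the idle fraction per window, the quantity one would then compare against the observed PRR.

```python
# Sketch of a channel-availability metric: the fraction of scan samples
# in which the channel is idle (RSSI below a clear-channel threshold).
import numpy as np

CCA_THRESHOLD_DBM = -85.0    # assumed clear-channel assessment threshold

def availability(rssi_dbm, window):
    """Per-window idle fraction of a periodically sampled RSSI trace."""
    idle = (np.asarray(rssi_dbm) < CCA_THRESHOLD_DBM).astype(float)
    n = len(idle) // window * window          # drop the incomplete tail
    return idle[:n].reshape(-1, window).mean(axis=1)

# Synthetic trace: a mostly idle channel with one Wi-Fi interference burst.
rng = np.random.default_rng(1)
rssi = rng.normal(-95.0, 2.0, 1000)
rssi[300:450] += 25.0                         # burst raises the RSSI

print(np.round(availability(rssi, window=100), 2))
# Windows overlapping the burst show low availability; in the paper, a
# metric of this kind is compared against the measured PRR.
```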
Abstract:
Most current-generation Wireless Sensor Network (WSN) nodes are equipped with multiple sensors of various types, and therefore support for multitasking and multiple concurrent applications is becoming increasingly common. This trend has been fostering the design of WSNs that allow several concurrent users to deploy applications with dissimilar requirements. In this paper, we extend the advantages of a holistic programming scheme by designing a novel compiler-assisted scheduling approach (called REIS) able to identify and eliminate redundancies across applications. To achieve this useful high-level optimization, we model each user application as a linear sequence of executable instructions. We show how well-known string-matching algorithms such as the Longest Common Subsequence (LCS) and the Shortest Common Super-sequence (SCS) can be used to produce an optimal merged monolithic sequence of the deployed applications that takes embedded scheduling information into account. We show that our approach can help achieve about 60% average energy savings in processor usage compared to the normal execution of concurrent applications.
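The merging step described above can be illustrated with the textbook construction of a shortest common supersequence from an LCS table. The sketch below (Python; the two "instruction sequences" are hypothetical stand-ins for the applications' code, and the scheduling-information handling of REIS is not reproduced) merges two sequences so that shared instructions appear only once.

```python
# Sketch: merge two instruction sequences into their shortest common
# supersequence (SCS), so instructions shared by both applications are
# emitted only once.
def shortest_common_supersequence(a, b):
    n, m = len(a), len(b)
    # lcs[i][j] = length of the LCS of a[:i] and b[:j]
    lcs = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i-1] == b[j-1]:
                lcs[i][j] = lcs[i-1][j-1] + 1
            else:
                lcs[i][j] = max(lcs[i-1][j], lcs[i][j-1])
    # Walk back through the table, emitting shared items once.
    out, i, j = [], n, m
    while i > 0 and j > 0:
        if a[i-1] == b[j-1]:
            out.append(a[i-1]); i -= 1; j -= 1
        elif lcs[i-1][j] >= lcs[i][j-1]:
            out.append(a[i-1]); i -= 1
        else:
            out.append(b[j-1]); j -= 1
    out.extend(reversed(a[:i])); out.extend(reversed(b[:j]))
    return list(reversed(out))

# Hypothetical instruction streams of two concurrent applications.
app1 = ["read_temp", "filter", "radio_tx"]
app2 = ["read_light", "filter", "radio_tx"]
print(shortest_common_supersequence(app1, app2))
# ['read_light', 'read_temp', 'filter', 'radio_tx']
```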
Abstract:
Wind energy is considered a hope for the future as a clean and sustainable energy source, as can be seen in the growing number of wind farms installed all over the world. With the huge proliferation of wind farms as an alternative to traditional fossil power generation, economic considerations dictate the necessity of monitoring systems to optimize availability and profits. The relatively high cost of operation and maintenance associated with wind power is a major issue. Wind turbines are most of the time located in remote areas or offshore, and these factors increase the referred operation and maintenance costs. Good maintenance strategies are needed to improve the health management of wind turbines. The objective of this paper is to show the application of neural networks to the analysis of wind turbine information in order to identify possible future failures, based on the turbine's past data.
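A minimal sketch of the kind of data-driven failure prediction described above is given below, in Python. The SCADA-style features (bearing temperature, vibration, power), the synthetic labels and the scikit-learn classifier are assumptions for illustration; the paper's actual network and inputs may differ.

```python
# Minimal sketch: train a small neural network on historical turbine
# condition data to flag operating states that preceded failures.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)

# Hypothetical features per sample: [bearing_temp_C, vibration_mm_s, power_kW]
healthy = rng.normal([60, 2.0, 1500], [5, 0.3, 200], size=(200, 3))
faulty = rng.normal([85, 6.0, 1100], [5, 0.8, 200], size=(40, 3))
X = np.vstack([healthy, faulty])
y = np.array([0] * 200 + [1] * 40)   # 1 = a failure followed this state

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Probability that a new operating point precedes a failure.
print(clf.predict_proba([[82, 5.5, 1150]])[0, 1])
```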
Abstract:
All everyday activities take place in space, and it is upon space that all information and knowledge revolve. The latter are the key elements in the organisation of territories. Their creation, use and distribution should therefore occur in a balanced way throughout the whole territory, in order to allow all individuals to participate in an egalitarian society in which the flow of knowledge can take precedence over the flow of interests. The information society depends, to a large extent, on the technological capacity to disseminate information, and consequently knowledge, throughout the territory, thereby creating conditions for a more balanced development from both the social and economic points of view and avoiding the existence of info-excluded territories. The Internet should therefore be considered more than a mere technology, given that its importance goes well beyond the frontiers of culture and society. It is already a part of daily life and of the new forms of thinking and transmitting information, making it a basic necessity, essential for full socio-economic development. Its role as a platform for the creation and distribution of content is regarded as an indispensable element for education in today's society, since it makes information a much more easily acquired good. "…in the same way that the new technologies of generation and distribution of energy allowed factories and large companies to establish themselves as the organisational bases of industrial society, so the internet today constitutes the technological base of the organisational form that characterises the Information Era: the network" (CASTELLS, 2004:15). The changes taking place today in regional and urban structures are increasingly evident, due to a combination of factors such as faster means of transport, more efficient telecommunications and other cheaper and more advanced technologies of information and knowledge. Although their impact on society is obvious, society itself also has a strong influence on the evolution of these technologies. And although physical distance has lost much of its power to explain particular phenomena of the economy and of society, other aspects, such as telecommunications, new forms of mobility, networks of innovation, the internet and cyberspace, have become more important and are the subject of study and profound analysis. The science of geographical information allows the analysis of such problems in a much more rigorous way, integrating in a much more balanced manner the concepts of place, of space and of time. Among the traditional disciplines that have already found their place in this process of research and analysis, special attention can be given to a geography of new spaces which, while being neither a geography of 'innovation', nor of the 'Internet', nor even a 'virtual' one, can be defined as a geography of the 'Information Society', encompassing not only technological aspects but also a socio-economic approach. According to the latest European statistical data, Portugal shows a deficit in the dissemination of information and knowledge in relation to its European partners. Some of the causes are very well identified - low levels of schooling, weak investment in innovation and R&D (in both the private and public sectors) - but others seem to be hidden behind socio-economic and technological factors.
The choice of Portugal as the case study therefore arose naturally, in a difficult quest to find the major causes of territorial asymmetries. The substantial amount of data needed for this work was very difficult to obtain and, for the islands of Madeira and the Azores, was insufficient, so only Continental Portugal was considered in this study. In an effort to understand the various aspects of the Geography of the Information Society, and bearing in mind the increasingly generalised use of information technologies together with the range of technologies available for the dissemination of information, it is important to: (i) reflect on the geography of the new socio-technological spaces; (ii) evaluate the potential for the dissemination of information and knowledge through the selection of variables that allow us to determine the dynamics of a given territory or region; and (iii) define a Geography of the Information Society in Continental Portugal.