18 results for data validation
in CiencIPCA - Instituto Politécnico do Cávado e do Ave, Portugal
Abstract:
The progressive aging of the population requires new kinds of social and medical intervention and the availability of different services for the elderly population. New applications have been developed and some services are now provided at home, allowing older people to stay at home instead of having to stay in hospitals. However, an adequate response to users' needs implies extensive use of personal data and information, including the building and maintenance of user profiles, which feed the systems with the data and information needed for proactive intervention in the scheduling of events in which the user may be involved. Fundamental Rights may be at stake, so a legal analysis must also be considered.
Abstract:
Wireless medical systems comprise four stages: the medical device, the data transport, the data collection, and the data evaluation stages. Whereas the performance of the first stage is highly regulated, the others are not. This paper concentrates on the data transport stage and argues that standardized tests need to be established for medical device manufacturers to provide comparable results concerning the communication performance of the wireless networks used to transport medical data. In addition, it suggests test parameters and procedures to be used to produce comparable communication performance results.
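As an illustration of the kind of standardized, comparable measurement the paper calls for, the following minimal Python sketch probes round-trip latency and packet loss over UDP; the echo endpoint, packet count, and timeout are assumptions for illustration, not parameters proposed by the authors.

```python
# Minimal sketch of a communication-performance probe: measures
# round-trip latency and packet loss over UDP against an assumed
# echo service. Endpoint, packet count, and timeout are illustrative.
import socket
import time

ECHO_HOST, ECHO_PORT = "192.168.1.10", 9000  # hypothetical echo service
N_PACKETS, TIMEOUT_S = 100, 1.0

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(TIMEOUT_S)

latencies, lost = [], 0
for seq in range(N_PACKETS):
    payload = seq.to_bytes(4, "big")
    start = time.perf_counter()
    sock.sendto(payload, (ECHO_HOST, ECHO_PORT))
    try:
        data, _ = sock.recvfrom(1024)
        if data[:4] == payload:
            latencies.append(time.perf_counter() - start)
    except socket.timeout:
        lost += 1

if latencies:
    print(f"mean RTT: {1000 * sum(latencies) / len(latencies):.2f} ms")
print(f"packet loss: {100 * lost / N_PACKETS:.1f} %")
```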
Abstract:
The increasing availability of mobility data, and the awareness of its importance and value, have been motivating many researchers to develop models and tools for analyzing movement data. This paper presents a brief survey of significant research work on the modeling, processing, and visualization of data about moving objects. We identified some key research fields that will provide better features for the online analysis of movement data. As a result of the literature review, we suggest a generic multi-layer architecture for the development of an online analysis processing software tool, which will be used to define the future work of our team.
Abstract:
More and more current software systems rely on non-trivial coordination logic for combining autonomous services, typically running on different platforms and often owned by different organizations. Often, however, coordination data is deeply entangled in the code and therefore difficult to isolate and analyse separately. COORDINSPECTOR is a software tool which combines slicing and program analysis techniques to isolate all coordination elements from the source code of an existing application. Such a reverse engineering process provides a clear view of the services actually invoked, as well as of the orchestration patterns which bind them together. The tool analyses Common Intermediate Language (CIL) code, the native language of the Microsoft .Net Framework; the scope of application of COORDINSPECTOR is therefore quite large: potentially any piece of code developed in any of the programming languages which compile to the .Net Framework. The tool generates graphical representations of the coordination layer and identifies the underlying business process orchestrations, rendering them as Orc specifications.
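The abstract does not expose COORDINSPECTOR's internals, but the underlying idea of locating coordination elements in CIL can be sketched as follows: scan disassembled CIL (e.g. ildasm text output) for call instructions into coordination-related APIs. The namespace list, regex, and input file below are illustrative assumptions, not the tool's actual heuristics.

```python
# Hypothetical sketch of coordination-element discovery in CIL:
# scan ildasm-style disassembly for call/callvirt instructions that
# reference coordination-related namespaces. Illustrative only.
import re

COORDINATION_NAMESPACES = (
    "System.Threading",     # locks, monitors, tasks
    "System.ServiceModel",  # WCF service invocations
    "System.Net",           # remote communication
)

CALL_RE = re.compile(r"\b(call|callvirt)\s+.*?([\w.]+)::(\w+)")

def coordination_calls(cil_text):
    """Yield (type, method) pairs for calls into coordination APIs."""
    for match in CALL_RE.finditer(cil_text):
        _, type_name, method = match.groups()
        if type_name.startswith(COORDINATION_NAMESPACES):
            yield type_name, method

with open("app.il") as f:  # ildasm output, assumed to exist
    for type_name, method in coordination_calls(f.read()):
        print(f"{type_name}::{method}")
```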
Abstract:
Pectus Carinatum (PC) is a chest deformity consisting of the anterior protrusion of the sternum and adjacent costal cartilages. Non-operative corrections, such as the orthotic compression brace, require prior information about the patient's chest surface in order to improve the overall brace fit. This paper focuses on the validation of the Kinect scanner for the modelling of an orthotic compression brace for the correction of Pectus Carinatum. To this end, a phantom chest wall surface was acquired using two scanner systems (Kinect and Polhemus FastSCAN) and compared through CT. The results show an RMS error of 3.25 mm between the CT data and the surface mesh from the Kinect sensor, and of 1.5 mm for the FastSCAN sensor.
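For illustration, an RMS surface error of this kind can be computed (in form, not with the study's data) as the root-mean-square of nearest-neighbour distances from the scanned surface to the CT reference. The file names below are hypothetical, and both surfaces are assumed to be already registered.

```python
# Illustrative sketch: RMS error between a scanned surface and a CT
# reference surface via nearest-neighbour distances. File names are
# hypothetical; surfaces assumed pre-registered in the same frame.
import numpy as np
from scipy.spatial import cKDTree

ct_points = np.loadtxt("ct_surface.xyz")        # N x 3 reference vertices
scan_points = np.loadtxt("kinect_surface.xyz")  # M x 3 scanner vertices

tree = cKDTree(ct_points)                # index the reference surface
distances, _ = tree.query(scan_points)   # closest CT point per scan vertex
rms = np.sqrt(np.mean(distances ** 2))
print(f"RMS surface error: {rms:.2f} mm")
```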
Abstract:
In Portugal, as in many European legal systems, «legal persons» may also be held criminally liable for cybercrimes, for example the following: «false information»; «damage to other programs or computer data»; «computer-software sabotage»; «illegitimate access»; «unlawful interception»; and «illegitimate reproduction of a protected program». In Portugal, however, there are many exceptions to the «question of criminal liability» of «legal persons»: some «legal persons» cannot be held liable for cybercrime, because the legislature did not allow it. These «legal persons» are, e.g., the following («public entities»): legal persons under public law, which include public business entities; utility entities, regardless of ownership; and other legal persons exercising public powers. In other words, and again as an example, a Portuguese public university, or a private concessionaire of a public service in Portugal, cannot commit (in Portugal) any of the cybercrimes mentioned. Fair? Unfair. All laws should provide that all legal persons can commit cybercrimes. PS: this is the English-language abstract of the article.
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
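As a self-contained illustration of the spectral transformation mentioned above, the sketch below uses scipy's shift-and-invert eigensolver, which factorizes the shifted matrix once and iterates on the transformed operator. The paper itself uses SLEPc with H-matrix LU decompositions; here a random sparse matrix merely stands in for the discretized integral operator.

```python
# Illustration of the shift-and-invert spectral transformation used to
# target a few eigenpairs of a large sparse matrix. The random matrix
# is a placeholder for the finite rank discretization in the paper.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 2000
A = sp.random(n, n, density=1e-3, random_state=rng, format="csr")
A = A + A.T + sp.diags(np.linspace(1.0, 2.0, n))  # symmetric, nonsingular

# sigma triggers shift-and-invert: (A - sigma*I) is factorized once
# (the role played by the cheap H-matrix LU in the paper) and the
# iteration converges to the eigenvalues closest to sigma.
vals, vecs = eigsh(A, k=4, sigma=1.5)
print(np.sort(vals))
```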
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains, to date, a supervised process in daily clinical practice. Indeed, challenging data often require user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to efficiently interact with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application in which user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, thus contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, in which the user input is mapped to a non-Cartesian space and this information is used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance both in terms of segmentation accuracy and in terms of total analysis time. This contributes to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire-based algorithm.
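A toy sketch of the interaction paradigm described above (not the authors' actual formulation): a 2D boundary parameterized in a non-Cartesian (polar) space is driven towards a user-supplied point with a smooth angular falloff.

```python
# Toy 2D illustration of interactive boundary steering: the boundary
# is parameterized in polar space as r(theta), and a user click pulls
# the radius toward the clicked point with a Gaussian angular falloff.
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = np.full_like(theta, 50.0)  # current segmentation boundary (a circle)

def apply_user_click(r, theta, click_xy, sigma=0.3, strength=0.8):
    """Drive the boundary toward a user-provided point in polar space."""
    click_r = np.hypot(*click_xy)
    click_theta = np.arctan2(click_xy[1], click_xy[0]) % (2.0 * np.pi)
    # angular distance on the circle, wrapped to [-pi, pi]
    d = (theta - click_theta + np.pi) % (2.0 * np.pi) - np.pi
    weight = strength * np.exp(-0.5 * (d / sigma) ** 2)
    return r + weight * (click_r - r)

r = apply_user_click(r, theta, click_xy=(62.0, 5.0))
print(f"radius range after edit: {r.min():.1f} .. {r.max():.1f}")
```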
Abstract:
The selection of an Enterprise Resource Planning (ERP) system is one of the most sensitive and highest-impact processes in the area of information systems and technologies, because it supports and integrates the whole business of an organization. Hence the importance of deciding on the best solution in order to contribute to the organization's competitiveness in a global and increasingly demanding market. It is therefore essential to provide tools that support decision making, turning complex and often intangible decisions into simple and quantifiable scenarios. This study addressed the adoption of the Analytical Hierarchy Process (AHP) multicriteria decision method to support the selection of an ERP system. The literature review was the source used to obtain the set of the most relevant criteria to be considered in this decision, which were subsequently validated through the systematic application of several surveys of experts and practitioners in the field of ERP systems. To support the application of AHP according to the model obtained in the study, a web application was developed that will be made available to the general public. Those responsible for the acquisition of ERP systems can use it to easily apply the AHP method based on the validated decision model. The web application can also be used as a validation tool, allowing data to be collected for future developments of the decision model.
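The computational core of AHP on which such a web application rests can be sketched briefly: criterion weights are the normalized principal eigenvector of a pairwise comparison matrix, checked by a consistency ratio. The comparison values and criteria below are illustrative only, not the study's validated model.

```python
# Core of the AHP method: weights from the principal eigenvector of a
# pairwise comparison matrix, plus Saaty's consistency ratio
# (CR < 0.1 is the usual acceptance threshold).
import numpy as np

# Pairwise comparisons (Saaty's 1-9 scale) for 3 hypothetical criteria,
# e.g. cost, functionality, vendor support.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)     # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # Saaty's random index
cr = ci / ri                             # consistency ratio

print("weights:", np.round(weights, 3))
print(f"CR = {cr:.3f} ({'consistent' if cr < 0.1 else 'revise judgements'})")
```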
Abstract:
In this paper, we present a method for estimating the local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method features considerably improved performance compared to two previously proposed approaches and has been validated against thickness measurements made by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of the real part's performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and thereby reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. Thus, the proposed approach for considering local thickness distribution in FEM crash simulations has high potential in the product development process of complex and highly demanding injection molded and cast parts, and is currently being used by Ford Motor Company.
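A hedged sketch of a common baseline for local thickness estimation on a closed surface mesh (not the improved method of the paper): cast a ray inward along each face normal and take the distance to the first opposite-wall hit. It assumes the trimesh library and a hypothetical STL file.

```python
# Baseline local-thickness estimate on a closed mesh: for each face,
# shoot a ray into the material along the inward normal and record the
# distance to the nearest opposite-wall intersection.
import numpy as np
import trimesh

mesh = trimesh.load("molded_part.stl")  # hypothetical part geometry

# Start just inside the surface to avoid hitting the source triangle.
origins = mesh.triangles_center - mesh.face_normals * 1e-4
directions = -mesh.face_normals  # point into the material

locations, index_ray, _ = mesh.ray.intersects_location(origins, directions)

dist = np.linalg.norm(locations - origins[index_ray], axis=1)
thickness = np.full(len(origins), np.inf)
np.minimum.at(thickness, index_ray, dist)    # nearest hit per ray
thickness[np.isinf(thickness)] = np.nan      # rays that found no wall

print(f"median local thickness: {np.nanmedian(thickness):.2f} mm")
```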
Abstract:
Websites are, nowadays, the face of institutions, but they are often neglected, especially when it comes to content. In the present paper, we describe a research effort whose final goal is the development of a model for measuring data quality in the institutional websites of health units. To that end, we have carried out a bibliographic review of the available approaches to the evaluation of website content quality, in order to identify the most recurrent dimensions and attributes, and we are currently carrying out a Delphi method process, presently in its second stage, with the purpose of reaching an adequate set of attributes for the measurement of content quality.
Abstract:
This article presents a research work whose goal was to achieve a model for evaluating data quality in the institutional websites of health units in a broad and balanced way. We carried out a literature review of the available approaches to the evaluation of website content quality, in order to identify the most recurrent dimensions and attributes, and we also carried out a Delphi method process with experts in order to reach an adequate set of attributes, and their respective weights, for the measurement of content quality. The results revealed a high level of consensus among the experts who participated in the Delphi process. Moreover, the different statistical analyses and techniques implemented are robust and lend confidence to our results and to the resulting model.
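Consensus in Delphi rounds of this kind is commonly quantified with Kendall's coefficient of concordance W (0 = no agreement, 1 = perfect agreement). The sketch below uses fabricated rankings; the statistic is standard, but the data are not from the study.

```python
# Kendall's W for expert agreement over ranked attributes (no ties).
import numpy as np

# rows = experts, columns = attributes being ranked (1 = most important)
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 2, 4, 3, 5],
])
m, n = ranks.shape                  # experts, attributes
R = ranks.sum(axis=0)               # rank totals per attribute
S = ((R - R.mean()) ** 2).sum()     # spread of the rank totals
W = 12.0 * S / (m ** 2 * (n ** 3 - n))
print(f"Kendall's W = {W:.3f}")
```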
Abstract:
Minimally invasive cardiovascular interventions guided by multiple imaging modalities are rapidly gaining clinical acceptance for the treatment of several cardiovascular diseases. These images are typically fused with richly detailed pre-operative scans through registration techniques, enhancing the intra-operative clinical data and easing image-guided procedures. Nonetheless, rigid models have been used to align the different modalities, without taking into account the anatomical variations of the cardiac muscle throughout the cardiac cycle. In the current study, we present a novel strategy to compensate for the beat-to-beat physiological adaptation of the myocardium. To this end, we intend to prove that a complete myocardial motion field can be quickly recovered from the displacement field at the myocardial boundaries, making this an efficient strategy to locally deform the cardiac muscle. We address this hypothesis by comparing three different strategies for recovering a dense myocardial motion field from a sparse one, namely a diffusion-based approach, thin-plate splines, and multiquadric radial basis functions. Two experimental setups were used to validate the proposed strategy. First, an in silico validation was carried out on synthetic motion fields obtained from two realistic simulated ultrasound sequences. Then, 45 mid-ventricular 2D cine magnetic resonance imaging sequences were processed to further evaluate the different approaches. The results showed that accurate boundary tracking combined with dense myocardial recovery via interpolation/diffusion is a potentially viable solution to speed up dense myocardial motion field estimation and, consequently, to deform/compensate the myocardial wall throughout the cardiac cycle.
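The thin-plate-spline variant compared in the study can be illustrated with scipy's RBFInterpolator: a dense 2D motion field is interpolated from sparse boundary displacements. The boundary points and displacements below are synthetic, not the study's data; scipy >= 1.7 is assumed.

```python
# Sketch of dense motion-field recovery from sparse boundary
# displacements using a thin-plate-spline interpolator.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse boundary samples: points on a ring (a toy "myocardial boundary")
angles = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
boundary = np.column_stack([np.cos(angles), np.sin(angles)])
displacement = 0.1 * np.column_stack([np.sin(angles), np.cos(angles)])

# Thin-plate-spline interpolator fitted to the sparse boundary motion
tps = RBFInterpolator(boundary, displacement, kernel="thin_plate_spline")

# Recover the dense field on a grid covering the muscle
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 50),
                            np.linspace(-1, 1, 50)), axis=-1).reshape(-1, 2)
dense_field = tps(grid)             # (2500, 2) displacement vectors
print("dense field shape:", dense_field.shape)
```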
Abstract:
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate, and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align both patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
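Step (3), the voxel-based rigid registration, can be sketched with SimpleITK standing in for the paper's implementation; the file names, similarity metric, and optimizer settings below are illustrative assumptions, not the framework's actual configuration.

```python
# Sketch of voxel-based rigid registration between the patient CBCT and
# the simulated implant CBCT, using SimpleITK as a stand-in.
import SimpleITK as sitk

fixed = sitk.ReadImage("patient_cbct.mha", sitk.sitkFloat32)
moving = sitk.ReadImage("simulated_implant_cbct.mha", sitk.sitkFloat32)

# Coarse alignment (the paper uses 2 operator-defined axis points).
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetInitialTransform(initial, inPlace=False)
reg.SetInterpolator(sitk.sitkLinear)

final_transform = reg.Execute(fixed, moving)
# The rigid parameters encode the implant's pose (rotation + translation).
print(final_transform.GetParameters())
```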