Abstract:
Introduction: Preterm birth (PTB) is one of the main determinants of neonatal morbidity and mortality, with adverse health consequences. Its causes are multifactorial, intrauterine infection being the most likely explanation for most of these outcomes. Chlamydia trachomatis (CT) infection is also believed to be involved in PTB and premature rupture of membranes (PROM). Objective: To determine the prevalence of CT among parturients and the possible risk factors associated with the preterm births attended at the Hospital Universitário Cassiano Antonio Moraes. Methods: Cross-sectional study conducted among parturients who had a PTB at a university hospital in Vitória - ES, Brazil, between June 2012 and August 2013. Participants answered a questionnaire covering sociodemographic, behavioral and clinical data. A urine sample was collected for CT screening using the polymerase chain reaction. Results: The prevalence of PTB during the study period was 26%. A total of 378 PTB cases were recorded; of these, 323 women were tested for CT, and forty-five (13.9%) tested positive. Among the participants, 31.6% were up to 24 years old, and CT-infected women were younger than the others (p = 0.022). A total of 76.2% were married or in a stable union, and CT was more frequent among single women (p = 0.018); 16.7% reported first sexual intercourse before 14 years of age. The causes of PTB were maternal-fetal in 40.9%, PROM in 29.7% and spontaneous preterm labor in 29.4%. In the multivariate analysis, being married was a protective factor [OR = 0.48 (95% CI: 0.24-0.97)]. None of the other characteristics were associated with CT infection. Conclusions: This study shows a high prevalence of CT infection among parturients with PTB, reinforcing the need to define screening and care strategies during prenatal care.
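The multivariate result above (a protective odds ratio with its 95% confidence interval) can be illustrated with a short sketch; the 2×2 counts below are hypothetical, not the study's data, and the interval uses Woolf's log-odds method:

```python
import math

# Hypothetical 2x2 table (illustrative counts only, not from the study):
# rows: married/stable union vs. single; columns: CT-positive, CT-negative
a, b = 25, 221  # married/stable union: CT+, CT-
c, d = 20, 57   # single:               CT+, CT-

# Odds ratio and Woolf's 95% confidence interval on the log-odds scale
or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_lo = math.exp(math.log(or_) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(or_) + 1.96 * se_log_or)

print(f"OR = {or_:.2f} (95% CI: {ci_lo:.2f}-{ci_hi:.2f})")
```

With these illustrative counts the odds ratio falls below 1, i.e. marriage/stable union appears protective, mirroring the direction of the study's reported association.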
Abstract:
Combining precision agriculture with the Diagnosis and Recommendation Integrated System (DRIS) makes it possible to spatially monitor the nutritional balance of coffee plantations, supporting fertilization recommendations that are better balanced and more economically adjusted. The objective of this work was to evaluate the spatial variability of the nutritional status of conilon coffee using the Nutritional Balance Index (IBN) and its relationship with yield. The yield of the plants at each sampling point was determined and mapped taking spatial variability into account; the IBN of the plants at each sampling point was determined and mapped; and principal component analysis (PCA) was used to estimate the coffee IBN by cokriging. The conilon coffee data were collected on an experimental farm in the municipality of Cachoeiro de Itapemirim-ES. The coffee IBN and yield were analyzed by geostatistics, based on the models and parameters of the semivariograms, using ordinary kriging interpolation to estimate values for unsampled locations. The IBN of the conilon coffee crop showed spatial dependence, but neither linear nor spatial correlation with yield. The crop under study is in nutritional imbalance: among the macronutrients, potassium showed the greatest imbalance in the area, and among the micronutrients, zinc and iron showed the lowest leaf concentrations. The maps produced made it possible to distinguish regions of greater and lesser nutritional imbalance and yield, allowing differentiated, site-specific management. The multivariate analysis based on principal components provides components highly correlated with the original variables P, Ca, Zn, Cu, K and B.
Cokriging using the principal components allows the IBN and the yield of the area to be estimated.
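The PCA step described above can be sketched in a few lines; the data matrix here is random, a hypothetical stand-in for leaf concentrations of P, Ca, Zn, Cu, K and B at the sampling points:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical leaf-nutrient matrix: 50 sampling points x 6 nutrients (P, Ca, Zn, Cu, K, B)
X = rng.normal(size=(50, 6))

# Standardize each nutrient, then obtain principal components via SVD
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

scores = Xs @ Vt.T                # component scores at each sampling point
explained = S**2 / np.sum(S**2)   # fraction of variance per component
loadings = Vt.T                   # how each component correlates with the nutrients
```

In the workflow above, the leading component scores would then serve as the secondary variable for cokriging the IBN at unsampled locations.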
Abstract:
The progressive aging of the population requires new kinds of social and medical intervention and the availability of different services for the elderly. New applications have been developed, and some services are now provided at home, allowing older people to stay at home instead of in hospital. However, an adequate response to users' needs implies extensive use of personal data and information, including building and maintaining user profiles and feeding the systems with the data and information needed for proactive intervention in the scheduling of events in which the user may be involved. Fundamental rights may be at stake, so a legal analysis must also be considered.
Abstract:
Wireless medical systems comprise four stages: the medical device, data transport, data collection and data evaluation. Whereas the performance of the first stage is highly regulated, the others are not. This paper concentrates on the data transport stage and argues that standardized tests must be established for medical device manufacturers to provide comparable results concerning the communication performance of the wireless networks used to transport medical data. In addition, it suggests test parameters and procedures to be used to produce comparable communication performance results.
Abstract:
The increasing availability of mobility data, and the awareness of its importance and value, have motivated many researchers to develop models and tools for analyzing movement data. This paper presents a brief survey of significant research on the modeling, processing and visualization of data about moving objects. We identify key research fields that will provide better features for online analysis of movement data. As a result of the literature review, we suggest a generic multi-layer architecture for the development of an online analysis processing software tool, which will be used to define the future work of our team.
Abstract:
This study aimed to evaluate the efficiency of simultaneous selection (selection indices), using estimated genetic gains, in yellow passion fruit, and to compare the methodologies of Mulamba & Mock and Elston. The study was conducted with 26 sib progenies of yellow passion fruit for the intrinsic production characteristics fruit number, fruit mass, and fruit length and diameter, and for the fruit characteristics skin thickness, soluble solids and acidity. Two approaches were applied: first, a joint analysis of the fruit characteristics and the intrinsic production characteristics in a single phase of selection; and second, an analysis in two phases, in which priority was given to the intrinsic production characteristics in the first phase, and then, in the second phase, the best fruit characteristics were chosen among the progenies from the first phase. Analysis of variance was applied to the data to detect genetic variability among progenies. Elston's selection index was unable to provide a distribution of genetic gains consistent with the purposes of the study, as it selected a single passion fruit progeny. The index based on the sum of ranks of Mulamba & Mock, however, was more suitable, as it provided a balanced distribution of gains, selecting a larger number of progenies. Selection using indices is advantageous in passion fruit, since it contributes to higher genetic gains for all the traits evaluated, and selection in a single phase proved efficient for progeny selection.
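The Mulamba & Mock rank-sum index mentioned above can be sketched in a few lines; the progeny-by-trait values below are illustrative only, and every trait is treated as higher-is-better for simplicity:

```python
import numpy as np

# Hypothetical progeny-by-trait matrix: 8 progenies (rows) x 3 traits (columns),
# where larger values are better for every trait in this toy example
vals = np.array([
    [12, 3.1, 40], [15, 2.9, 38], [10, 3.5, 45], [14, 3.0, 42],
    [11, 2.8, 37], [16, 3.3, 44], [13, 3.2, 41], [ 9, 2.7, 36],
])

# Mulamba & Mock: rank the progenies within each trait (rank 1 = best),
# then sum the ranks per progeny; the smallest sums are selected
ranks = np.argsort(np.argsort(-vals, axis=0), axis=0) + 1
rank_sum = ranks.sum(axis=1)
selected = np.argsort(rank_sum)[:3]  # indices of the 3 best progenies
```

Because the index works on ranks rather than raw values, traits measured on very different scales contribute equally, which is what produces the balanced distribution of gains reported above.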
Abstract:
More and more current software systems rely on non-trivial coordination logic to combine autonomous services, typically running on different platforms and often owned by different organizations. Often, however, coordination data is deeply entangled in the code and therefore difficult to isolate and analyse separately. COORDINSPECTOR is a software tool which combines slicing and program analysis techniques to isolate all coordination elements from the source code of an existing application. Such a reverse engineering process provides a clear view of the services actually invoked as well as of the orchestration patterns which bind them together. The tool analyses Common Intermediate Language (CIL) code, the native language of the Microsoft .Net Framework; the scope of application of COORDINSPECTOR is therefore quite large: potentially any piece of code developed in any of the programming languages which compile to the .Net Framework. The tool generates graphical representations of the coordination layer and identifies the underlying business process orchestrations, rendering them as Orc specifications.
Abstract:
Abstract: in Portugal, as in much of Europe's legal systems, «legal persons» may be held criminally liable for cybercrimes, such as the following: «false information»; «damage to other programs or computer data»; «computer-software sabotage»; «illegitimate access»; «unlawful interception»; and «illegitimate reproduction of a protected program». In Portugal, however, there are many exceptions to the criminal liability of «legal persons»: some «legal persons» cannot be blamed for cybercrime, because the legislature left them out. These are the following «public entities»: legal persons under public law, which include public business entities; utility entities, regardless of ownership; and other legal persons exercising public powers. In other words, and again as an example, a Portuguese public university, or a private concessionaire of a public service in Portugal, cannot commit (in Portugal) any of the cybercrimes mentioned. Fair? Unfair. All laws should provide that all legal persons can commit cybercrimes. PS: abstract of the article in English.
Abstract:
In this work, we consider the numerical solution of a large eigenvalue problem resulting from a finite rank discretization of an integral operator. We are interested in computing a few eigenpairs, with an iterative method, so a matrix representation that allows for fast matrix-vector products is required. Hierarchical matrices are appropriate for this setting, and also provide cheap LU decompositions required in the spectral transformation technique. We illustrate the use of freely available software tools to address the problem, in particular SLEPc for the eigensolvers and HLib for the construction of H-matrices. The numerical tests are performed using an astrophysics application. Results show the benefits of the data-sparse representation compared to standard storage schemes, in terms of computational cost as well as memory requirements.
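The shift-and-invert spectral transformation mentioned above can be sketched with SciPy in place of SLEPc; a plain sparse tridiagonal matrix stands in for the discretized integral operator, and SciPy's sparse LU replaces the hierarchical-matrix LU used in the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigs

# Hypothetical sparse stand-in for the discretized operator:
# the standard tridiagonal (-1, 2, -1) matrix, whose eigenvalues are known
n = 500
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Shift-and-invert: ARPACK factorizes A - sigma*I and iterates with its inverse,
# converging to the eigenvalues closest to the shift sigma
vals, vecs = eigs(A, k=4, sigma=0.0, which="LM")
vals = np.sort(vals.real)
```

For this matrix the exact eigenvalues are 2 - 2·cos(kπ/(n+1)), so the smallest computed eigenvalue can be checked analytically; in the paper's setting the same transformation is what makes the cheap H-matrix LU decomposition valuable.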
Abstract:
Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains to date a supervised process in daily clinical practice. Indeed, challenging data often require user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to efficiently interact with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application in which user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, in which the user input is mapped to a non-Cartesian space and used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves interaction with the segmented surface, making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance in terms of both segmentation accuracy and total analysis time, contributing to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire algorithm.
Abstract:
In this paper, we present a method for estimating local thickness distribution in finite element models, applied to injection molded and cast engineering parts. This method features considerably improved performance compared to two previously proposed approaches, and has been validated against thickness measured by different human operators. We also demonstrate that using this method to assign a distribution of local thickness in FEM crash simulations results in a much more accurate prediction of real part performance, thus increasing the benefits of computer simulations in engineering design by enabling zero-prototyping and thereby reducing product development costs. The simulation results have been compared to experimental tests, evidencing the advantage of the proposed method. The proposed approach to considering local thickness distribution in FEM crash simulations thus has high potential for the product development process of complex and highly demanding injection molded and cast parts, and is currently being used by Ford Motor Company.
Abstract:
Websites are nowadays the face of institutions, but they are often neglected, especially when it comes to content. In the present paper, we put forth an investigation whose final goal is the development of a model for measuring data quality in the institutional websites of health units. To that end, we have carried out a bibliographic review of the available approaches to evaluating website content quality, in order to identify the most recurrent dimensions and attributes, and we are currently carrying out a Delphi Method process, presently in its second stage, with the purpose of reaching an adequate set of attributes for the measurement of content quality.
Abstract:
This article presents research whose goal was to achieve a model for the evaluation of data quality in institutional websites of health units in a broad and balanced way. We carried out a literature review of the available approaches to evaluating website content quality, in order to identify the most recurrent dimensions and attributes, and we also carried out a Delphi method process with experts to reach an adequate set of attributes, and their respective weights, for the measurement of content quality. The results revealed a high level of consensus among the experts who participated in the Delphi process. Moreover, the different statistical analyses and techniques employed are robust and lend confidence to our results and the resulting model.
Abstract:
The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. To this end, an image-based framework is proposed to estimate the patient-specific implant pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using 2 operator-defined points at the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed across 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08º and 1.4º, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implant pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
Abstract:
One of the current frontiers in the clinical management of Pectus Excavatum (PE) patients is the prediction of the surgical outcome prior to the intervention. This can be done through computerized simulation of the Nuss procedure, which requires an anatomically correct representation of the costal cartilage. To this end, we take advantage of the tubular structure of the costal cartilage to detect it through multi-scale vesselness filtering. This information is then used in an interactive 2D initialization procedure which uses anatomical maximum intensity projections of 3D vesselness feature images to efficiently initialize the 3D segmentation process. We identify the cartilage tissue centerlines in these projected 2D images using a livewire approach. We finally refine the 3D cartilage surface through region-based sparse field level-sets. We have tested the proposed algorithm on 6 non-contrast CT datasets from PE patients. Good segmentation performance was found against reference manual contouring, with an average Dice coefficient of 0.75±0.04 and an average mean surface distance of 1.69±0.30 mm. The proposed method requires roughly 1 minute for the interactive initialization step, which can positively contribute to extended use of this tool in clinical practice, since current manual delineation of the costal cartilage can take up to an hour.
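The Dice coefficient used above as a segmentation quality metric can be sketched as follows; the two binary masks are synthetic stand-ins for the algorithm output and the manual reference contour:

```python
import numpy as np

# Synthetic reference mask: a 32-voxel cube inside a 64^3 volume
ref = np.zeros((64, 64, 64), dtype=bool)
ref[16:48, 16:48, 16:48] = True

# Simulated segmentation: the same cube shifted by 2 voxels along one axis
seg = np.roll(ref, shift=2, axis=0)

# Dice coefficient: twice the overlap divided by the sum of the two volumes
dice = 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
print(f"Dice = {dice:.4f}")
```

A 2-voxel offset of a 32-voxel cube leaves 30/32 of the slabs overlapping, so the Dice score here is exactly 0.9375; real segmentations of thin tubular structures such as costal cartilage score lower, which is why the 0.75±0.04 reported above is considered good performance.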