985 results for Predictive mean matching imputation
Abstract:
A vision system is applied to full-field displacement and deformation measurements in solid mechanics. A speckle-like pattern is first formed on the surface under investigation. To determine the displacement field of one speckle image with respect to a reference speckle image, sub-images, referred to as Zones of Interest (ZOI), are considered. The field is obtained by matching a ZOI in the reference image with the corresponding ZOI in the displaced image. Two image processing techniques are used to implement the matching procedure: the cross-correlation function and the minimum mean square error (MMSE) of the ZOI intensity distribution. The two algorithms are compared, and the influence of the ZOI size on the accuracy of the measurements is studied.
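The matching step described above can be sketched compactly. The following is a minimal illustration, not the authors' implementation: a ZOI from the reference image is compared against integer-pixel shifted windows of the moved image under both criteria mentioned in the abstract (the function and parameter names are assumptions).

```python
import numpy as np

def match_zoi(reference, moved, zoi_origin, zoi_size, search_radius):
    """Find the displacement of one Zone of Interest (ZOI) by exhaustive
    search, scoring candidates with normalized cross-correlation (NCC)
    and with the mean square error (MSE)."""
    y0, x0 = zoi_origin
    h, w = zoi_size
    zoi = reference[y0:y0 + h, x0:x0 + w].astype(float)
    zoi_zm = zoi - zoi.mean()

    best_ncc, ncc_disp = -np.inf, (0, 0)
    best_mse, mse_disp = np.inf, (0, 0)
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            ys, xs = y0 + dy, x0 + dx
            if ys < 0 or xs < 0:
                continue  # candidate window leaves the image
            cand = moved[ys:ys + h, xs:xs + w].astype(float)
            if cand.shape != zoi.shape:
                continue  # candidate window leaves the image
            cand_zm = cand - cand.mean()
            denom = np.sqrt((zoi_zm ** 2).sum() * (cand_zm ** 2).sum())
            ncc = (zoi_zm * cand_zm).sum() / denom if denom > 0 else -np.inf
            mse = ((zoi - cand) ** 2).mean()
            if ncc > best_ncc:
                best_ncc, ncc_disp = ncc, (dy, dx)
            if mse < best_mse:
                best_mse, mse_disp = mse, (dy, dx)
    return ncc_disp, mse_disp
```

In practice, sub-pixel accuracy is obtained by interpolating around the integer optimum; the abstract's comparison concerns how the two criteria and the ZOI size affect that accuracy.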
Abstract:
Big data comes in various ways, types, shapes, forms and sizes. Indeed, almost all areas of science, technology, medicine, public health, economics, business, linguistics and social science are bombarded by ever-increasing flows of data begging to be analyzed efficiently and effectively. In this paper, we propose a rough idea of a possible taxonomy of big data, along with some of the most commonly used tools for handling each particular category of bigness. The dimensionality p of the input space and the sample size n are usually the main ingredients in the characterization of data bigness. The specific statistical machine learning technique used to handle a particular big data set will depend on which category of the bigness taxonomy it falls into. Large p, small n data sets, for instance, require a different set of tools from the large n, small p variety. Among other tools, we discuss Preprocessing, Standardization, Imputation, Projection, Regularization, Penalization, Compression, Reduction, Selection, Kernelization, Hybridization, Parallelization, Aggregation, Randomization, Replication, and Sequentialization. Indeed, it is important to emphasize right away that the so-called no free lunch theorem applies here, in the sense that there is no universally superior method that outperforms all other methods on all categories of bigness. It is also important to stress that simplicity, in the sense of Ockham's razor and its non-plurality principle of parsimony, tends to reign supreme when it comes to massive data. We conclude with a comparison of the predictive performance of some of the most commonly used methods on a few data sets.
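Several of the tools named here (Imputation, Standardization, Penalization) compose naturally into a single modeling pipeline. A minimal scikit-learn sketch on assumed synthetic data, purely illustrative and not taken from the paper:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic "large p, small n" data with missing entries (illustrative only).
X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=20, random_state=0)
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.05] = np.nan  # knock out 5% of the entries

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),   # Imputation
    ("scale", StandardScaler()),                  # Standardization
    ("model", LogisticRegression(penalty="l1", solver="liblinear",
                                 C=0.1)),         # Penalization
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```

An L1 penalty is a common choice for the large p, small n regime because it performs variable selection and regularization at once.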
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, 2016.
Abstract:
Every year, music piracy worldwide costs several billion dollars in economic losses, lost jobs and lost worker earnings, as well as millions of dollars in lost tax revenue. Most music piracy is due to the rapid growth and ease of current technologies for copying, sharing, manipulating and distributing musical data [Domingo, 2015], [Siwek, 2007]. Watermarking of audio signals has been proposed to protect authors' rights and to allow the localization of the instants at which an audio signal has been tampered with. In this thesis, we propose to use the bio-inspired sparse spike-train representation (the spikegram) to design a new method for locating tampering in audio signals, as well as a new copyright protection method and, finally, a new perceptual attack, based on the spikegram, against audio watermarking systems. We first propose a technique for locating tampering in audio signals. To this end, we combine a modified spread spectrum (MSS) method with a sparse representation. We use an adapted perceptual matching pursuit technique (PMP [Hossein Najaf-Zadeh, 2008]) to generate a sparse representation (spikegram) of the input audio signal that is invariant to time shifts [E. C. Smith, 2006] and that takes into account masking phenomena as they are observed in human hearing. An authentication code is embedded in the coefficients of the spikegram representation, which are then combined with the masking thresholds. The watermarked signal is resynthesized from the modified coefficients, and the resulting signal is transmitted to the decoder. At the decoder, to identify a tampered segment of the audio signal, the authentication codes of all intact segments are analyzed; if the codes cannot be detected correctly, the segment is known to have been tampered with. We propose to watermark according to the spread spectrum principle (MSS) in order to obtain a large capacity in the number of embedded watermark bits. In situations where there is desynchronization between the encoder and the decoder, our method is still able to detect tampered parts. Compared with the state of the art, our approach has the lowest error rate in detecting tampered parts. We used the mean opinion score (MOS) test to measure the quality of the watermarked signals. We evaluate the semi-fragile watermarking method by the bit error rate (number of erroneous bits divided by the total number of embedded bits) under several attacks. The results confirm the superiority of our approach for locating tampered parts in audio signals while preserving signal quality. We then propose a new technique for the protection of audio signals, based on the spikegram representation and using two dictionaries (TDA, Two-Dictionary Approach). The spikegram is used to encode the host signal with a dictionary of gammatone filters. For the watermarking, we use two different dictionaries, selected according to the input bit to be embedded and to the content of the signal.
Our approach finds the appropriate gammatones (called watermarking kernels) on the basis of the value of the bit to be embedded, and embeds the watermark bits in the phase of the watermarking gammatones. Moreover, the TDA is shown to be error-free in the absence of any attack, and the decorrelation of the watermarking kernels is shown to allow the design of a very robust audio watermarking method. Experiments showed the best robustness for the proposed method when the watermarked signal is corrupted by MP3 compression at 32 kbps, with a payload of 56.5 bps, compared with several recent techniques. In addition, we studied the robustness of the watermark when the new USAC codec (Unified Speech and Audio Coding) at 24 kbps is used; the payload then lies between 5 and 15 bps. Finally, we use spikegrams to propose three new attack methods, which we compare with recent attacks such as 32 kbps MP3 and 24 kbps USAC compression. These attacks comprise the PMP attack, the inaudible-noise attack, and the sparse replacement attack. In the PMP attack, the watermarked signal is represented and resynthesized with a spikegram. In the inaudible-noise attack, inaudible noise is generated and added to the spikegram coefficients. In the sparse replacement attack, the spectro-temporal features of each segment of the signal (the time spikes) are found using the spikegram, and similar time spikes are replaced with one another. To compare the effectiveness of the proposed attacks, we apply them to a spread spectrum watermark decoder. The sparse replacement attack is shown to reduce the normalized correlation of the spread spectrum decoder by a larger factor than MP3 (32 kbps) or 24 kbps USAC compression.
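The spread spectrum embedding and correlation detection that the thesis builds on can be illustrated on generic coefficients. A minimal sketch, assuming additive embedding into an arbitrary coefficient vector; this is not the thesis's MSS/spikegram scheme:

```python
import numpy as np

def embed_bit(coeffs, bit, key, alpha=0.05):
    """Additively embed one watermark bit into a coefficient vector using
    a key-derived pseudo-random +/-1 chip sequence (spread spectrum)."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=coeffs.shape)
    sign = 1.0 if bit else -1.0
    return coeffs + alpha * sign * chips

def detect_bit(coeffs, key):
    """Decode the bit as the sign of the normalized correlation with the
    same key-derived chip sequence."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=coeffs.shape)
    corr = np.dot(coeffs, chips) / (np.linalg.norm(coeffs) * np.linalg.norm(chips))
    return corr > 0, corr

host = np.random.default_rng(1).normal(size=2048)  # stand-in for spikegram coefficients
marked = embed_bit(host, bit=1, key=42)
bit, corr = detect_bit(marked, key=42)
print(bit, round(corr, 4))
```

An attack "succeeds" in this framework when it drives the decoder's normalized correlation toward zero without audible damage, which is exactly how the sparse replacement attack above is evaluated.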
Abstract:
The COVID-19 pandemic, sparked by the SARS-CoV-2 virus, stirred global comparisons to historical pandemics. Initially presenting a high mortality rate, it later stabilized globally at around 0.5-3%. Patients manifest a spectrum of symptoms, necessitating efficient triaging for appropriate treatment strategies, ranging from symptomatic relief to antivirals or monoclonal antibodies. Beyond traditional approaches, emerging research suggests a potential link between COVID-19 severity and alterations in gut microbiota composition, impacting inflammatory responses. However, most studies focus on severe hospitalized cases without standardized criteria for severity. Addressing this gap, the first study in this thesis spans diverse COVID-19 severity levels, utilizing 16S rRNA amplicon sequencing on fecal samples from 315 subjects. The findings highlight significant microbiota differences correlated with severity. Machine learning classifiers, including a multi-layer convolutional neural network, demonstrated the potential of microbiota compositional data to predict patient severity, achieving an 84.2% mean balanced accuracy starting one week post-symptom onset. These preliminary results underscore the gut microbiota's potential as a biomarker in clinical decision-making for COVID-19. The second study delves into mild COVID-19 cases, exploring their implications for 'long COVID' or Post-Acute COVID-19 Syndrome (PACS). Employing longitudinal analysis, the study unveils dynamic shifts in microbial composition during the acute phase, akin to severe cases. Innovative techniques, including network approaches and spline-based longitudinal analysis, were deployed to assess microbiota dynamics and potential associations with PACS. The research suggests that even in mild cases, mechanisms similar to those seen in hospitalized patients are established with respect to changes in the intestinal microbiota during the acute phase of infection. These findings lay the foundation for potential microbiota-targeted therapies to mitigate inflammation, potentially preventing long COVID symptoms in the broader population. In essence, these studies offer valuable insights into the intricate relationships between COVID-19 severity, gut microbiota, and the potential for innovative clinical applications.
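A minimal sketch of the kind of severity classifier described, scored by balanced accuracy as in the abstract. This is not the thesis's network: the data here are synthetic stand-ins for 16S relative abundances, and the centered log-ratio transform and random forest are assumed, commonly used choices for compositional data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Illustrative stand-in for 16S abundance data: 315 subjects x 200 taxa.
counts = rng.gamma(shape=0.5, scale=1.0, size=(315, 200)) + 1e-6
rel = counts / counts.sum(axis=1, keepdims=True)   # relative abundances
severity = rng.integers(0, 2, size=315)            # placeholder labels

# Centered log-ratio (CLR) transform, a common choice for compositional data.
log_rel = np.log(rel)
clr = log_rel - log_rel.mean(axis=1, keepdims=True)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, clr, severity, cv=5, scoring="balanced_accuracy")
print(scores.mean())  # ~0.5 on random labels; the reported 84.2% used real data
```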
Abstract:
This thesis illustrates the construction of a mathematical model of a hydraulic system, oriented to the design of a model predictive control (MPC) algorithm. The modeling procedure starts with the basic formulation of a piston-servovalve system. The latter is a complex nonlinear system with some unknown and unmeasurable effects, which makes the modeling procedure challenging. A first approximation of the system parameters is obtained from datasheet information, workbench tests, and other data provided by the company. Then, to validate and refine the model, open-loop simulations are matched against characteristics obtained from real acquisitions. The final set of ODEs captures the main peculiarities of the system, apart from some characteristics due to highly varying and unknown hydraulic effects, such as the unmodeled resistive elements of the pipes. Since the model presents many internal complexities, a simplified version is derived after careful analysis; the simplified model is used to correctly linearize and discretize the nonlinear model. On this basis, an MPC algorithm for reference tracking with linear constraints is implemented. The results show the potential of MPC in this kind of industrial application: high-quality tracking performance while satisfying state and input constraints. The increased robustness and flexibility are evident with respect to the standard control techniques, such as PID controllers, usually adopted for these systems. The simulations for model validation and for the controlled system were carried out in Python.
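One receding-horizon step of linear MPC for reference tracking with input and state constraints can be written as a small quadratic program. A minimal sketch in Python with cvxpy; the A, B matrices, horizon, weights, and bounds below are placeholders, not the thesis's identified piston-servovalve model:

```python
import numpy as np
import cvxpy as cp

# Illustrative discrete-time linear model x+ = A x + B u (placeholder numbers).
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
N = 20                              # prediction horizon
x_ref = np.array([1.0, 0.0])        # reference to track
x0 = np.zeros(2)                    # current state estimate

x = cp.Variable((2, N + 1))
u = cp.Variable((1, N))
cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    # Quadratic tracking cost plus a small input penalty.
    cost += cp.sum_squares(x[:, k + 1] - x_ref) + 0.01 * cp.sum_squares(u[:, k])
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= 1.0,        # input constraint
                    cp.abs(x[0, k + 1]) <= 2.0]    # state constraint

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()
print(u.value[:, 0])  # apply only the first move, then re-solve at the next step
```

Applying only the first optimal move and re-solving at every sampling instant is what gives MPC its constraint handling and robustness advantage over a fixed PID law.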
Abstract:
Phase I trials use a small number of patients to define a maximum tolerated dose (MTD) and the safety of new agents. We compared data from phase I and registration trials to determine whether early trials predicted later safety and final dose. We searched the U.S. Food and Drug Administration (FDA) website for drugs approved in nonpediatric cancers (January 1990-October 2012). The recommended phase II dose (RP2D) and toxicities from phase I were compared with doses and safety in later trials. In 62 of 85 (73%) matched trials, the dose from the later trial was within 20% of the RP2D. In a multivariable analysis, phase I trials of targeted agents were less predictive of the final approved dose (OR, 0.2 for adopting ± 20% of the RP2D for targeted vs. other classes; P = 0.025). Of the 530 clinically relevant toxicities in later trials, 70% (n = 374) were described in phase I. A significant relationship (P = 0.0032) between an increasing number of patients in phase I (up to 60) and the ability to describe future clinically relevant toxicities was observed. Among 28,505 patients in later trials, the rate of drug-related death was 1.41%. In conclusion, dosing based on phase I trials was associated with a low toxicity-related death rate in later trials. The ability to predict relevant toxicities correlates with the number of patients in the initial phase I trial. The final approved dose was within 20% of the RP2D in 73% of the assessed trials.
Abstract:
The search for an Alzheimer's disease (AD) biomarker is one of the most relevant contemporary research topics due to the high prevalence and social costs of the disease. Functional connectivity (FC) of the default mode network (DMN) is a plausible candidate for such a biomarker. We evaluated 22 patients with mild AD and 26 age- and gender-matched healthy controls. All subjects underwent resting-state functional magnetic resonance imaging (fMRI) in a 3.0 T scanner. To identify the DMN, seed-based FC of the posterior cingulate was calculated. We also measured the sensitivity/specificity of the method and verified its correlation with cognitive performance. We found a significant difference between patients with mild AD and controls in average z-scores for the DMN, whole cortical positive (WCP) connectivity, and absolute values. DMN individual values showed a sensitivity of 77.3% and a specificity of 70%. DMN and WCP values were correlated with global cognition and episodic memory performance. We showed that individual measures of DMN connectivity can be considered a promising method to differentiate AD, even at an early phase, from normal aging. Further studies with larger numbers of participants, as well as validation of normal values, are needed for more definitive conclusions.
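Seed-based FC as described here reduces to correlating the mean seed time series with every voxel and Fisher z-transforming the result. A minimal sketch on toy data (the array shapes and the seed mask are assumptions; real pipelines add preprocessing such as motion correction and nuisance regression):

```python
import numpy as np

def seed_fc_map(bold, seed_mask):
    """Seed-based functional connectivity: Pearson-correlate the mean seed
    time series with every voxel's time series, then Fisher z-transform.

    bold: array of shape (n_voxels, n_timepoints)
    seed_mask: boolean array of shape (n_voxels,)
    """
    seed_ts = bold[seed_mask].mean(axis=0)
    seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (bold - bold.mean(axis=1, keepdims=True)) / bold.std(axis=1, keepdims=True)
    r = vox @ seed_z / len(seed_ts)                       # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))    # Fisher z

rng = np.random.default_rng(0)
bold = rng.normal(size=(5000, 180))     # toy data: 5000 voxels, 180 volumes
mask = np.zeros(5000, dtype=bool)
mask[:50] = True                        # hypothetical posterior cingulate seed
zmap = seed_fc_map(bold, mask)
print(zmap.shape, zmap[mask].mean())
```

Averaging such z-values within the DMN gives the per-subject score whose group difference and sensitivity/specificity the study reports.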
Abstract:
Obstructive lung diseases of different etiologies present with progressive peripheral airway involvement. The peripheral airways, known as the silent lung zone, are not adequately evaluated with conventional function tests. The principle of gas washout has been used to detect pulmonary ventilation inhomogeneity and to estimate the location of the underlying disease process. Volumetric capnography (VC) analyzes the pattern of CO2 elimination as a function of expired volume. The aim was to measure normalized phase 3 slopes with VC in patients with non-cystic fibrosis bronchiectasis (NCB) and in bronchitic patients with chronic obstructive pulmonary disease (COPD) in order to compare the slopes obtained for the two groups. Patients with NCB and severe COPD were enrolled sequentially from an outpatient clinic (Hospital of the State University of Campinas); a control group, paired by sex and age, was established for the NCB group. All subjects performed spirometry, VC, and the 6-Minute Walk Test (6MWT). Two comparisons were made: the NCB group versus its control group, and the NCB group versus the COPD group. The project was approved by the ethics committee of the institution. The statistical tests used were the Wilcoxon test or Student's t-test; P<0.05 was considered statistically significant. Comparing the NCB group (N=20) with the control group (N=20), significant differences were found in body mass index and in several functional variables (spirometric, VC, 6MWT), with worse results in the NCB group. In the comparison between the COPD group (N=20) and the NCB group, although patients with COPD had worse spirometric and 6MWT values, the capnographic variables mean phase 2 slope (Slp2), mean phase 3 slope normalized by the mean expiratory volume, and mean phase 3 slope normalized by the end-tidal CO2 concentration were similar. These findings may indicate that the gas elimination curves are not sensitive enough to monitor the severity of structural abnormalities. The normalized phase 3 slope may be worth exploring as a more sensitive index of small airway disease, even though it may not be equally sensitive in discriminating the severity of the alterations.
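As a hedged illustration of how a normalized phase 3 slope can be computed from a volumetric capnogram (the phase boundaries, units, and end-tidal normalization below are assumptions for the sketch, not the study's exact protocol):

```python
import numpy as np

def normalized_phase3_slope(volume, pco2, p3_start, p3_end):
    """Fit a line to the phase 3 portion of a volumetric capnogram and
    normalize the slope by end-tidal CO2 (illustrative definitions)."""
    sel = (volume >= p3_start) & (volume <= p3_end)
    slope, _ = np.polyfit(volume[sel], pco2[sel], 1)  # e.g. mmHg per mL
    petco2 = pco2[-1]                                  # end-tidal CO2
    return slope / petco2

# Toy expirogram: 500 mL breath; flat phase 1, steep phase 2, gently sloped phase 3.
v = np.linspace(0, 500, 501)
p = np.piecewise(v, [v < 50, (v >= 50) & (v < 150), v >= 150],
                 [0.0, lambda v: (v - 50) * 0.35, lambda v: 35 + (v - 150) * 0.01])
print(normalized_phase3_slope(v, p, 200, 480))
```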
Abstract:
The treatment of subglottic stenosis in children remains a challenge for the otorhinolaryngologist and may involve both endoscopic and open surgery. The aims were to report the experience of two tertiary facilities in the treatment of acquired subglottic stenosis in children with balloon laryngoplasty, and to identify predictive factors for the success of the technique and its complications. This was a descriptive, prospective study of children diagnosed with acquired subglottic stenosis and submitted to balloon laryngoplasty as primary treatment. Balloon laryngoplasty was performed in 37 children with an average age of 22.5 months; 24 presented chronic subglottic stenosis and 13 acute subglottic stenosis. Success rates were 100% for acute subglottic stenosis and 32% for chronic subglottic stenosis. Success was significantly associated with acute stenosis, initial grade of stenosis, younger age, and the absence of tracheostomy. Transitory dysphagia, observed in three children, was the only complication. Balloon laryngoplasty may be considered the first line of treatment for acquired subglottic stenosis. In acute cases the success rate is 100%, and although the results are less promising in chronic cases, complications are not significant and the option of open surgery is preserved.
Abstract:
OBJECTIVE: The aim of this study was to evaluate the role of angiotensin I, II and 1-7 in left ventricular hypertrophy of Wistar and spontaneously hypertensive rats submitted to sinoaortic denervation. METHODS: Ten weeks after sinoaortic denervation, hemodynamic and morphofunctional parameters were analyzed, and the left ventricle was dissected for biochemical analyses. RESULTS: The hypertensive groups (control and denervated) showed an increase in mean blood pressure compared with the normotensive groups (control and denervated). Blood pressure variability was higher in the denervated groups than in their respective controls. Left ventricular mass and collagen content were increased in the normotensive denervated group and in both spontaneously hypertensive groups compared with Wistar controls. Both hypertensive groups presented a higher concentration of angiotensin II than Wistar controls, whereas the angiotensin 1-7 concentration was decreased in the hypertensive denervated group relative to the Wistar groups. There was no difference in angiotensin I concentration among groups. CONCLUSION: Our results suggest that not only blood pressure variability and reduced baroreflex sensitivity but also elevated levels of angiotensin II and a reduced concentration of angiotensin 1-7 may contribute to the development of left ventricular hypertrophy. These data indicate that baroreflex dysfunction associated with changes in the renin-angiotensin system may be a predictive factor for left ventricular hypertrophy and cardiac failure.
Abstract:
OBJECTIVE: To analyze the impact on human health of exposure to particulate matter emitted from burnings in the Brazilian Amazon region. METHODS: This was an ecological study using an environmental exposure indicator presented as the percentage of annual hours (AH%) of PM2.5 above 80 μg/m3. The outcome variables were the rates of hospitalization due to respiratory disease among children, the elderly and the intermediate age group, and due to childbirth. Data were obtained from the National Space Research Institute and the Ministry of Health for all of the microregions of the Brazilian Amazon region, for the years 2004 and 2005. Multiple regression models for the outcome variables in relation to the predictive variable AH% of PM2.5 above 80 μg/m3 were analyzed. The Human Development Index (HDI) and mean number of complete blood counts per 100 inhabitants in the Brazilian Amazon region were the control variables in the regression analyses. RESULTS: The association of the exposure indicator (AH%) was higher for the elderly than for other age groups (β = 0.10). For each 1% increase in the exposure indicator there was an increase of 8% in child hospitalization, 10% in hospitalization of the elderly, and 5% for the intermediate age group, even after controlling for HDI and mean number of complete blood counts. No association was found between the AH% and hospitalization due to childbirth. CONCLUSIONS: The indicator of atmospheric pollution showed an association with occurrences of respiratory diseases in the Brazilian Amazon region, especially in the more vulnerable age groups. This indicator may be used to assess the effects of forest burning on human health.
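For context on the reported effect sizes: in a log-linear model of hospitalization rates, a coefficient translates into a percent change per unit of exposure as sketched below. This is a hedged illustration of the arithmetic only; the paper's exact model specification is not reproduced here.

$$\log(\text{rate}) = \beta_0 + \beta_1 \,\mathrm{AH\%} \;\Rightarrow\; \%\Delta\text{rate per 1\% of AH} = 100\,\bigl(e^{\beta_1} - 1\bigr)$$

For example, a coefficient of about 0.077 yields $100\,(e^{0.077}-1) \approx 8\%$, the magnitude reported for child hospitalization.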
Abstract:
The expansion of the three-term contingency into units with four or more elements has opened new perspectives for the understanding of complex behavior, such as the emergence of responses that derive from the formation of equivalent stimulus classes and that model symbolic and conceptual behavior. In experimental research, the matching-to-sample procedure has frequently been employed to establish conditional discriminations. In particular, obtaining generalized identity matching is taken to demonstrate the acquisition of the concepts of sameness and difference. We argue that seeking to understand these concepts in terms of conditional discrimination processes may have been responsible for the frequent failures to demonstrate them in nonhuman subjects. The lack of correspondence between the discriminative processes responsible for establishing the reflexivity relation among stimuli that form equivalent classes and generalized identity matching is reviewed here across empirical studies and discussed with respect to its implications.
Abstract:
OBJECTIVE: To investigate the relationship between the adequacy of energy delivery and intensive care unit mortality in patients receiving exclusive enteral nutrition therapy. METHODS: Prospective observational study conducted in an intensive care unit in 2008 and 2009. Patients >18 years of age who received enteral nutrition therapy for >72 h were included. The adequacy of energy delivery was estimated by the administered/prescribed ratio. An unconditional logistic regression model was used to investigate the relationship between the predictor variables (adequacy of energy delivery, APACHE II score, sex, age, and length of stay in the intensive care unit) and the outcome of intensive care unit mortality. RESULTS: Sixty-three patients were included (mean age 58 years, mortality 27%), 47.6% of whom received more than 90% of the prescribed energy (mean adequacy 88.2%). The mean energy balance was -190 kcal/day. A significant association was observed between death and the variables age and length of stay in the intensive care unit, after the variables adequacy of energy delivery, APACHE II score, and sex were removed during the modeling process. CONCLUSION: The adequacy of energy delivery did not influence the intensive care unit mortality rate. Enteral nutrition infusion protocols followed rigorously, with administered/prescribed adequacy above 70%, appear to be sufficient not to interfere with mortality. Thus, the requirement to reach values close to 100% can be questioned, considering the high frequency of interruptions in the delivery of enteral feeding due to gastrointestinal intolerance and fasting for tests and procedures. Future research may identify the ideal target for the adequacy of energy delivery that results in a significant reduction in complications, mortality, and costs.
Abstract:
PURPOSE: To analyze the usefulness of the weight gain/height gain ratio from birth to two and three years of age as a predictive risk indicator of excess weight at preschool age. METHODS: The weight and height/length of 409 preschool children at daycare centers were measured according to internationally recommended procedures. The weight values and body mass indices of the children were transformed into z-scores by the standard method described by the World Health Organization. The Pearson correlation coefficients (rP) and the linear regressions between the anthropometric parameters and the body mass index z-scores of the preschool children were statistically analyzed (alpha = 0.05). RESULTS: The mean age of the study population was 3.2 years (± 0.3 years). The prevalence of excess weight was 28.8%, and the prevalence of overweight and obesity was 8.8%. The correlation coefficients between the body mass index z-scores of the preschool children and birth weight or body mass index at birth were low (0.09 and 0.10, respectively). There was a high correlation (rP = 0.79) between the mean monthly weight gain and the body mass index z-score of the preschool children. A higher coefficient (rP = 0.93) was observed between the ratio of mean weight gain to height gain (g/cm) and the preschool body mass index z-score. The coefficients and their differences were statistically significant. CONCLUSION: Regardless of weight or length at birth, the mean ratio of weight gain (g) per cm of height growth from birth showed a strong correlation with the body mass index of preschool children. These results suggest that this ratio may be a good indicator of the risk of excess weight and obesity in preschool-aged children.
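The headline statistic is a Pearson correlation between the g/cm gain ratio and the later BMI z-score. A toy sketch of that computation (the numbers below are synthetic and chosen to produce a strong correlation by construction; they are not the study's data):

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 409
height_gain = rng.normal(45.0, 4.0, n)                      # cm gained since birth
weight_gain = 120.0 * height_gain + rng.normal(0, 800, n)   # g gained since birth
# Synthetic BMI z-scores that track the gain ratio, for illustration only.
bmi_z = (weight_gain / height_gain - 120.0) / 18.0 + rng.normal(0, 0.4, n)

ratio = weight_gain / height_gain        # g gained per cm of height growth
r, p = pearsonr(ratio, bmi_z)
print(f"r = {r:.2f}, p = {p:.1e}")
```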