982 results for Second Step


Relevance: 60.00%

Abstract:

Alginate microgels are widely used as delivery systems in the food, cosmetics, and pharmaceutical industries for the encapsulation and sustained release of hydrophilic compounds and cells. However, the encapsulation of lipophilic molecules inside these microgels remains a great challenge because of the complex oil-core matrix required. The present study describes an original two-step approach allowing the easy encapsulation of several oil microdroplets within alginate microgels. In the first step, stable oil microdroplets were formed by preparing an oil-in-water (O/W) Pickering emulsion. To stabilize this emulsion, we used two solid particles, namely cotton cellulose nanocrystals (CNC) and calcium carbonate (CaCO3). It was observed that the surface of the oil microdroplets formed was totally covered by a CNC layer, whereas CaCO3 particles were adsorbed onto the cellulose layer. This solid CNC shell efficiently stabilized the oil microdroplets, preventing them from undesired coalescence. In the second step, oil microdroplets resulting from the Pickering emulsion were encapsulated within alginate microgels using microfluidics. Specifically, the outermost layer of the oil microdroplets, composed of CaCO3 particles, was used to initiate alginate gelation inside the microfluidic device, following the internal gelation mode. The released Ca2+ ions induced gel formation through physical cross-linking with alginate molecules. This innovative and easy-to-carry-out two-step approach was successfully developed to fabricate monodisperse alginate microgels of 85 μm in diameter containing around 12 oil microdroplets of 15 μm in diameter. These new oil-core alginate microgels represent an attractive system for the encapsulation of lipophilic compounds such as vitamins, aroma compounds or anticancer drugs, and could be applied in various domains including food, cosmetics, and medical applications.

Relevance: 60.00%

Abstract:

Brazil is a country characterized by its low consumption of fish. With a recorded consumption of 10.6 kg/inhabitant/year, it falls below the 12 kg/inhabitant/year recommended by the UN. Regular consumption of fish provides health benefits, and its introduction into school meals is an important strategy for establishing this food habit in a population. In this context, the objective of this study was to understand children's perception of fish in the public school system through the Projective Mapping (PM) and Word Association (WA) techniques, and to evaluate the acceptability of a fish-based product in school meals. First, in order to better understand the perception of children of different ages regarding fish-based products, Projective Mapping was applied using food pictures, together with word association. A total of 149 children from three public schools in Pato Branco, Paraná State, Brazil, took part in this study. Three groups of children aged 5-6, 7-8 and 9-10 years were interviewed individually by six monitors experienced in applied sensory methods. Ten pictures of healthy foods (sushi, salad, fruit, fish, chicken) and less healthy foods (pizza, pudding, cake, hamburger, fries) were distributed to the children, who were asked to paste the pictures on an A3 sheet so that products they considered similar were placed close together and those considered very different were placed far apart. After this, the children described the images and the image groups (Ultra Flash Profile). The results revealed that the PM technique was easily carried out and understood by all the children, and the use of images made its implementation easier. The analysis also revealed that perceptions differed between age groups and that hedonic perceptions of the fish-based products carried greater weight among the older children. The WA technique proved to be an important tool for understanding children's perception of fish and reinforced the results previously obtained with PM. In a second step, the acceptance of a tilapia fish burger in school meals was evaluated. For this task, the school cooks were trained to prepare the burgers. Acceptance was evaluated with a 5-point facial hedonic scale (1 = disliked very much to 5 = liked very much). Students of both genders, aged 5 to 10 years (n = 142), tasted the burgers at lunchtime as the protein portion of the meal. The tilapia-based product proved to be a food of important nutritional value and low caloric value. Multinomial logistic regression analysis showed no significant main effects of age or gender on acceptance; however, the interaction between these two variables was statistically significant. With an 87% acceptance rate, fish burgers show potential for consumption in school meals.
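The age-by-gender interaction test mentioned above can be illustrated with a multinomial logistic regression. The following is a minimal sketch using statsmodels on a hypothetical data set; the column names, coding and random values are assumptions, not the study's data.

# Minimal sketch of a multinomial logistic regression of hedonic ratings on
# age, gender and their interaction; the data frame is a hypothetical stand-in.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 142
df = pd.DataFrame({
    "age": rng.integers(5, 11, n),        # years (assumed coding)
    "gender": rng.integers(0, 2, n),      # 0 = girl, 1 = boy (assumed coding)
    "rating": rng.integers(1, 6, n),      # 5-point facial hedonic scale
})

# Design matrix with main effects and the age x gender interaction.
X = sm.add_constant(pd.DataFrame({
    "age": df["age"],
    "gender": df["gender"],
    "age_x_gender": df["age"] * df["gender"],
}))

model = sm.MNLogit(df["rating"], X)
result = model.fit(disp=False)
print(result.summary())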

Relevance: 60.00%

Abstract:

The objective of this thesis was to evaluate phosphorus dynamics in heterotrophic cultivation and the production of cellular compounds by Aphanothece microscopica Nägeli, with a view to assessing the prospects of implementing a microalgal biorefinery. Accordingly, the behaviour of the micro-organism under study was evaluated in heterotrophic cultivation using dairy industry effluent as the culture medium. The work was carried out in three stages. In the first stage, the influence of temperature (20 and 30°C) and of the maximum and minimum nutrient concentrations available in the dairy effluent, in particular of reactive dissolved phosphorus (RDP), on nutrient removal was evaluated. The results showed that the initial dissolved phosphorus concentration and the temperature influenced cell growth and nutrient removal efficiency. In terms of process optimisation, cultivations conducted at 20°C and at the highest RDP concentrations (5.5 mg L-1) in the dairy effluent were the most efficient in converting pollutants into biomass and removing nutrients. The second stage was carried out to evaluate the distribution dynamics of phosphorus in the liquid and solid phases of the heterotrophic reactor when the dairy effluent was treated by Aphanothece microscopica Nägeli at 20°C and at the maximum dissolved phosphorus concentrations found in the effluent. It was shown that the phosphorus forms in the liquid phase of the reactor are characterised by the predominance of the dissolved fraction over the particulate fraction, with organic phosphorus as the predominant fraction. Regarding the solid phase, it was demonstrated that Aphanothece microscopica Nägeli, when cultivated heterotrophically, contains 3.8 times more phosphorus than required for cell growth. It was also demonstrated that biological phosphorus removal by Aphanothece microscopica Nägeli can result in substantial financial gains for wastewater treatment plants. A third stage was carried out to estimate the production of cellular compounds by Aphanothece microscopica Nägeli from dairy effluent, as well as the effect of reducing the cultivation temperature on the lipid content, at the moment when the maximum concentration of this cellular component is obtained, under the optimised conditions. In the logarithmic growth phase, concentrations of 41.8% proteins, 28.5% carbohydrates, 10.4% lipids and 10.8% minerals were obtained. The highest lipid content recorded at 20°C corresponded to the biomass analysed in the logarithmic phase. By reducing the temperature to 5°C for a period of 30 h it is possible to obtain lipid concentrations 2.4 times higher than those recorded in the logarithmic phase at 20°C. However, no significant differences (p ≤ 0.05) as a function of temperature were found with respect to the lipid concentrations obtained for the biomass at 10°C after 40 h. The fatty acid profile of the biomass generated at 20°C presented palmitic, oleic, γ-linolenic, palmitoleic and stearic acids as the major fatty acids, with an increase in the concentration of saturated fatty acids at the expense of unsaturated ones when the temperature is reduced.
In parallel, a batch (discontinuous) heterotrophic reactor was defined, and it was demonstrated that extrapolating the operation from batch to continuous mode requires a heterotrophic bioreactor with a working volume of 240.51 m3, allowing 950 m3 of effluent to be treated per day and generating 11.8 kg d-1 of biomass suitable for the production of cellular compounds by Aphanothece microscopica Nägeli, aiming at the simultaneous removal of organic matter, total nitrogen and total phosphorus, and generating inputs that can support the implementation of a microalgal biorefinery.
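As a quick consistency check on the continuous-operation figures above, the implied hydraulic retention time can be worked out from the quoted working volume and daily flow (an illustrative calculation, not a value reported in the thesis):

\mathrm{HRT} = \frac{V}{Q} = \frac{240.51\ \mathrm{m^3}}{950\ \mathrm{m^3\,d^{-1}}} \approx 0.25\ \mathrm{d} \approx 6.1\ \mathrm{h}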

Relevance: 60.00%

Abstract:

For derived flood frequency analysis based on hydrological modelling, long continuous precipitation time series with high temporal resolution are needed. Often, the observation network of recording rain gauges is poor, especially regarding the limited length of the available rainfall time series. Stochastic precipitation synthesis is a good alternative either to extend or to regionalise rainfall series to provide adequate input for long-term rainfall-runoff modelling with subsequent estimation of design floods. Here, a new two-step procedure for the stochastic synthesis of continuous hourly space-time rainfall is proposed and tested for the extension of short observed precipitation time series. First, a single-site alternating renewal model is presented to simulate independent hourly precipitation time series for several locations. The alternating renewal model describes wet spell durations, dry spell durations and wet spell intensities using univariate frequency distributions, separately for two seasons. The dependence between wet spell intensity and duration is accounted for by 2-copulas. For disaggregation of the wet spells into hourly intensities, a predefined profile is used. In the second step, a multi-site resampling procedure is applied to the synthetic point rainfall event series to reproduce the spatial dependence structure of rainfall. Resampling is carried out successively on all synthetic event series using simulated annealing with an objective function considering three bivariate spatial rainfall characteristics. In a case study, synthetic precipitation is generated for locations with short observation records in two mesoscale catchments of the Bode river basin in northern Germany. The synthetic rainfall data are then applied for derived flood frequency analysis using the hydrological model HEC-HMS. The results show good performance in reproducing average and extreme rainfall characteristics as well as observed flood frequencies. The presented model has the potential to be used for ungauged locations through regionalisation of the model parameters.
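To make the first, single-site step more concrete, below is a minimal sketch of an alternating renewal generator. The exponential and gamma marginals, the Gaussian copula and the uniform within-event profile are assumptions chosen for illustration; the study fits seasonal distributions and 2-copulas to observed data.

# Minimal sketch of a single-site alternating renewal rainfall generator.
# Marginal distributions, copula family and parameter values are illustrative
# assumptions only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def gaussian_copula_pair(rho, size):
    """Sample (u, v) uniforms whose dependence follows a Gaussian copula."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size)
    return stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

def simulate_hourly_rain(n_events, mean_dry_h=30.0, mean_wet_h=6.0,
                         mean_intensity=1.5, rho=0.4):
    series = []
    u_dur, u_int = gaussian_copula_pair(rho, n_events)
    for k in range(n_events):
        dry = int(np.ceil(stats.expon(scale=mean_dry_h).ppf(rng.random())))
        wet = int(np.ceil(stats.expon(scale=mean_wet_h).ppf(u_dur[k])))
        intensity = stats.gamma(a=2.0, scale=mean_intensity / 2.0).ppf(u_int[k])
        series.extend([0.0] * dry)          # dry spell (mm/h)
        series.extend([intensity] * wet)    # wet spell, uniform profile
    return np.array(series)

rain = simulate_hourly_rain(n_events=500)
print(rain.shape, rain[rain > 0].mean())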

Relevance: 60.00%

Abstract:

This thesis investigates how web search evaluation can be improved using historical interaction data. Modern search engines combine offline and online evaluation approaches in a sequence of steps that a tested change needs to pass through to be accepted as an improvement and subsequently deployed. We refer to such a sequence of steps as an evaluation pipeline. In this thesis, we consider the evaluation pipeline to contain three sequential steps: an offline evaluation step, an online evaluation scheduling step, and an online evaluation step. We show that historical user interaction data can aid in improving the accuracy or efficiency of each of these steps and, as a result, the overall efficiency of the entire evaluation pipeline. Firstly, we investigate how user interaction data can be used to build accurate offline evaluation methods for query auto-completion mechanisms. We propose a family of offline evaluation metrics for query auto-completion that represent the effort the user has to spend in order to submit their query. The parameters of our proposed metrics are trained against a set of user interactions recorded in the search engine's query logs. From our experimental study, we observe that our proposed metrics are significantly more correlated with an online user satisfaction indicator than the metrics proposed in the existing literature. Hence, fewer changes will pass the offline evaluation step only to be rejected after the online evaluation step, which allows a higher efficiency of the entire evaluation pipeline to be achieved. Secondly, we state the problem of the optimised scheduling of online experiments. We tackle this problem by considering a greedy scheduler that prioritises the evaluation queue according to the predicted likelihood of success of a particular experiment. This predictor is trained on a set of online experiments and uses a diverse set of features to represent an online experiment. Our study demonstrates that a higher number of successful experiments per unit of time can be achieved by deploying such a scheduler on the second step of the evaluation pipeline; consequently, the efficiency of the evaluation pipeline can be increased. Next, to improve the efficiency of the online evaluation step, we propose the Generalised Team Draft interleaving framework. Generalised Team Draft considers both the interleaving policy (how often a particular combination of results is shown) and click scoring (how important each click is) as parameters in a data-driven optimisation of the interleaving sensitivity. Further, Generalised Team Draft is applicable beyond domains with a list-based representation of results, e.g. in domains with a grid-based representation, such as image search. Our study, using datasets of interleaving experiments performed in both the document and image search domains, demonstrates that Generalised Team Draft achieves the highest sensitivity. A higher sensitivity indicates that the interleaving experiments can be deployed for a shorter period of time or use a smaller sample of users. Importantly, Generalised Team Draft optimises the interleaving parameters with respect to historical interaction data recorded in the interleaving experiments. Finally, we propose to apply sequential testing methods to reduce the mean deployment time of the interleaving experiments, and we adapt two sequential tests for interleaving experimentation. We demonstrate that a significant decrease in experiment duration can be achieved by using such sequential testing methods. The highest efficiency is achieved by the sequential tests that adjust their stopping thresholds using historical interaction data recorded in diagnostic experiments. A further experimental study demonstrates that cumulative gains in online experimentation efficiency can be achieved by combining the interleaving sensitivity optimisation approaches, including Generalised Team Draft, with the sequential testing approaches. Overall, the central contributions of this thesis are the proposed approaches to improve the accuracy or efficiency of the steps of the evaluation pipeline: offline evaluation frameworks for query auto-completion, an approach for the optimised scheduling of online experiments, a general framework for efficient online interleaving evaluation, and a sequential testing approach for online search evaluation. The experiments in this thesis are based on large-scale real-life datasets obtained from Yandex, a leading commercial search engine, and demonstrate the potential of the proposed approaches to improve the efficiency of the evaluation pipeline.
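For readers unfamiliar with interleaving, the following is a minimal sketch of the standard Team Draft algorithm that Generalised Team Draft builds on; the learned interleaving policy and click-scoring parameters described in the thesis are not reproduced here.

# Minimal sketch of standard Team Draft interleaving and click credit.
import random

def team_draft_interleave(ranking_a, ranking_b, length=10):
    """Interleave two rankings; return the shown list and per-document team labels."""
    interleaved, teams, seen = [], {}, set()
    while len(interleaved) < length:
        picks = ["A", "B"]
        random.shuffle(picks)                 # random team order each round
        for team in picks:
            source = ranking_a if team == "A" else ranking_b
            doc = next((d for d in source if d not in seen), None)
            if doc is None:
                return interleaved, teams
            interleaved.append(doc)
            teams[doc] = team
            seen.add(doc)
            if len(interleaved) >= length:
                break
    return interleaved, teams

def credit_clicks(clicked_docs, teams):
    """Credit each click to the team that contributed the clicked document."""
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in teams:
            wins[teams[doc]] += 1
    return wins

ranking_a = ["d1", "d2", "d3", "d4"]
ranking_b = ["d2", "d5", "d1", "d6"]
shown, teams = team_draft_interleave(ranking_a, ranking_b, length=6)
print(shown, credit_clicks(["d2", "d5"], teams))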

Relevance: 60.00%

Abstract:

This paper presents the determination of a mean solar radiation year and of a typical meteorological year for the region of Funchal on Madeira Island, Portugal. The data set includes hourly mean and extreme values for air temperature, relative humidity and wind speed, and hourly mean values for solar global and diffuse radiation, for the period 2004-2014, with a maximum data coverage of 99.7%. The determination of the mean solar radiation year consisted, in a first step, of averaging all values for each hour/day pair and, in a second step, of applying a five-day centred moving average to the hourly values. The determination of the typical meteorological year was based on Finkelstein-Schafer statistics, which allow a complete year of real measurements to be obtained through the selection and combination of typical months, preserving the long-term averages while still allowing the analysis of short-term events. The typical meteorological year was validated by comparing its monthly averages with the long-term monthly averages. The values obtained were very close, so the typical meteorological year can accurately represent the long-term data series. The typical meteorological year can be used in the simulation of renewable energy systems, namely solar energy systems, and for predicting the energy performance of buildings.
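To illustrate the Finkelstein-Schafer selection step, the sketch below scores each candidate month of one variable against the long-term distribution for that calendar month and picks the year with the smallest statistic. The data, the use of a single variable and the absence of weighting across variables are simplifying assumptions, not the paper's full procedure.

# Minimal sketch of typical-month selection with the Finkelstein-Schafer (FS)
# statistic: compare each candidate month's empirical CDF with the long-term
# CDF for that calendar month. The radiation values are hypothetical.
import numpy as np

def empirical_cdf(sample, grid):
    sample = np.sort(sample)
    return np.searchsorted(sample, grid, side="right") / len(sample)

def fs_statistic(month_values, longterm_values):
    grid = np.sort(longterm_values)
    return np.mean(np.abs(empirical_cdf(month_values, grid)
                          - empirical_cdf(longterm_values, grid)))

rng = np.random.default_rng(1)
years = range(2004, 2015)
# Hypothetical daily global radiation for each candidate January (kWh/m2).
candidate_months = {y: rng.normal(3.0 + 0.05 * (y - 2004), 0.8, 31) for y in years}
longterm = np.concatenate(list(candidate_months.values()))

scores = {y: fs_statistic(v, longterm) for y, v in candidate_months.items()}
typical_year = min(scores, key=scores.get)
print(typical_year, round(scores[typical_year], 4))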

Relevance: 60.00%

Abstract:

Increasingly, the main objectives in industry are production at low cost, with maximum quality and with the shortest possible manufacturing time. To achieve this goal, industry frequently resorts to computer numerical control (CNC) machines, since this technology makes it possible to reach high precision and shorter processing times. CNC machine tools can be applied to different machining processes, such as turning, milling and drilling, among others. Of all these processes, the most widely used is milling, owing to its versatility. This process is normally used to machine metallic materials such as steel and cast iron. In this work, the effects of varying four milling parameters (cutting speed, feed rate, radial depth of cut and axial depth of cut), individually and through the interaction between some of them, on the surface roughness of a hardened steel (12738 steel) are analysed. Two optimisation methods are used for this analysis: the Taguchi method and the response surface method. The first method was used to reduce the number of possible combinations and, consequently, the number of tests to be carried out. The response surface methodology (RSM) was used in order to compare the results with those obtained by the Taguchi method; according to some works in the specialised literature, RSM converges more quickly to an optimal value. The Taguchi method is well known in the industrial sector, where it is used for quality control. It presents interesting concepts, such as robustness and quality loss, and is very useful for identifying variations in the production system during the industrial process, quantifying the variation and allowing undesirable factors to be eliminated. With this method, an L16 orthogonal array was built, two different levels were defined for each parameter, and sixteen tests were carried out. After each test, the surface roughness of the part was measured. Based on the roughness measurements, a statistical treatment of the data was carried out through analysis of variance (ANOVA) in order to determine the influence of each parameter on surface roughness. The minimum measured roughness was 1.05 μm. This study also determined the contribution of each machining parameter and of their interactions. The analysis of the F-ratio values (ANOVA) reveals that the most important factors for minimising surface roughness are the radial depth of cut and the interaction between radial and axial depth of cut, with contributions of about 30% and 24%, respectively. In a second stage, the same study was carried out using the response surface method, in order to compare the results obtained by the two methods and to determine which optimisation method better minimises roughness. Response surface methodology is based on a set of mathematical and statistical techniques that are useful for modelling and analysing problems in which the response of interest is influenced by several variables and whose objective is to optimise that response. For this method, only five tests were performed, unlike Taguchi, since in just five tests roughness values lower than the average roughness obtained with the Taguchi method were achieved.
The lowest value obtained by this method was 1.03 μm. It is therefore concluded that RSM is a more suitable optimisation method than Taguchi for the tests performed: better results were obtained with a smaller number of tests, which implies less tool wear, shorter processing time and a significant reduction in the material used.
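As an illustration of the quality-loss idea behind the Taguchi analysis described above, the snippet below computes the smaller-the-better signal-to-noise ratio commonly used when minimising roughness; the sixteen roughness values are hypothetical placeholders, not the measured L16 results.

# Minimal sketch of the Taguchi "smaller-the-better" signal-to-noise ratio,
# S/N = -10 log10(mean(y^2)), applied to hypothetical roughness values (um)
# for each run of an L16 array.
import numpy as np

def sn_smaller_is_better(measurements):
    y = np.asarray(measurements, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

runs_ra = [1.32, 1.05, 1.48, 1.21, 1.67, 1.39, 1.10, 1.55,
           1.44, 1.18, 1.60, 1.27, 1.36, 1.12, 1.50, 1.24]
sn_per_run = [sn_smaller_is_better([ra]) for ra in runs_ra]
best_run = int(np.argmin(runs_ra))
print(f"best run: {best_run + 1}, Ra = {runs_ra[best_run]} um, "
      f"S/N = {sn_per_run[best_run]:.2f} dB")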

Relevance: 60.00%

Abstract:

Sadistic sexual offenders are generally described as a distinct clinical entity who commit serious offences. However, the very notion of sexual sadism raises a substantial number of problems, among them problems of validity and reliability. Perceived as a disorder that one either has or does not have, sadism has been studied as if sadists were fundamentally different. At present, a growing body of work suggests that most psychological disorders present as a difference of degree (dimension) rather than a difference in kind (taxon). Even though the medical conception still prevails with regard to sexual sadism, several authors have suggested that it might be better conceptualised using a dimensional approach. In parallel, our knowledge of the factors contributing to the development of sexual sadism is limited and rests on weak empirical support. To date, very few studies have examined the factors leading to the development of sexual sadism, and even fewer have attempted to validate their theories. Moreover, our knowledge comes mostly from case studies of sexual murderers, a very particular subgroup of offenders frequently motivated by sadistic sexual interests. To our knowledge, no study has so far proposed a developmental model focusing specifically on sexual sadism. Yet identifying factors related to the development of sexual sadism is essential both for our understanding and for designing effective intervention strategies. The present thesis is set in a context aimed at clarifying the concept of sexual sadism. More specifically, we are interested in its latent structure, its measurement and its developmental origins. Using a sample of 514 sexual offenders assessed at the Massachusetts Treatment Center, the viability of a dimensional conception of sexual sadism will be tested using taxometric analyses, which allow the latent structure of a construct to be studied. In a second step, using Rasch analyses and two-parameter item response theory analyses, we will develop the MTC Sadism Scale (MTCSS), a dimensional measure of sexual sadism. In a third and final step, a developmental model will be built using structural equation modelling. The present thesis will contribute to clarifying the concept of sexual sadism. Clarifying its latent structure and developmental factors will make it possible to identify the research designs best able to capture its essential aspects. In addition, this will help identify the factors for which intervention is most appropriate in order to reduce recidivism or its severity.
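For reference, the two-parameter item response model referred to above is usually written as follows, where \theta is the latent trait, and a_i and b_i are the discrimination and difficulty parameters of item i (a standard formulation, not necessarily the exact specification used for the MTCSS):

P(X_i = 1 \mid \theta) = \frac{1}{1 + \exp\{-a_i(\theta - b_i)\}}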

Relevance: 60.00%

Abstract:

In our research we investigate the output accuracy of discrete event simulation models and agent based simulation models when studying human centric complex systems. In this paper we focus on human reactive behaviour, as it can be implemented in both modelling approaches using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation by modelling the reactive behaviour of the department's staff and customers. First, we carried out a validation experiment in which we compared the results from our models with the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
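A minimal discrete event simulation of a fitting-room queue, in the spirit of the case study, can be sketched with SimPy as below; the number of cubicles, arrival rate and trying-on times are invented for illustration and do not reflect the department store data or the reactive-behaviour rules of the authors' models.

# Minimal SimPy sketch of a fitting-room queue as a discrete event simulation.
# All numeric parameters are illustrative assumptions.
import random
import simpy

WAIT_TIMES = []

def customer(env, fitting_room):
    arrival = env.now
    with fitting_room.request() as req:
        yield req                                        # queue for a free cubicle
        WAIT_TIMES.append(env.now - arrival)
        yield env.timeout(random.expovariate(1 / 6.0))   # ~6 min trying on

def arrivals(env, fitting_room):
    while True:
        yield env.timeout(random.expovariate(1 / 2.0))   # ~1 arrival every 2 min
        env.process(customer(env, fitting_room))

random.seed(7)
env = simpy.Environment()
fitting_room = simpy.Resource(env, capacity=4)           # 4 cubicles (assumed)
env.process(arrivals(env, fitting_room))
env.run(until=8 * 60)                                    # one 8-hour day
print(f"served {len(WAIT_TIMES)}, "
      f"mean wait {sum(WAIT_TIMES) / len(WAIT_TIMES):.1f} min")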

Relevance: 60.00%

Abstract:

We describe and evaluate two reduced models for nonlinear chemical reactions in a chaotic laminar flow. Each model involves two separate steps to compute the chemical composition at a given location and time. The “manifold tracking model” first tracks backwards in time a segment of the stable manifold of the requisite point. This then provides a sample of the initial conditions appropriate for the second step, which requires solving one-dimensional problems for the reaction in Lagrangian coordinates. By contrast, the first step of the “branching trajectories model” simulates both the advection and diffusion of fluid particles that terminate at the appropriate point; the chemical reaction equations are then solved along each of the branched trajectories in a second step. Results from each model are compared with full numerical simulations of the reaction processes in a chaotic laminar flow.
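As a rough, heavily simplified illustration of the trajectory-based structure of the second model (ignoring the diffusive branching), the sketch below tracks a single fluid particle backwards in time through an assumed chaotic velocity field (a time-periodic double gyre, chosen purely as an example) and then integrates a simple reaction forward along that Lagrangian path; neither the flow nor the reaction corresponds to the paper's system.

# Rough sketch: advect a particle backwards through an assumed chaotic flow,
# then integrate a simple reaction forward along the recovered trajectory.
import numpy as np
from scipy.integrate import solve_ivp

A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10   # double-gyre parameters (assumed)

def velocity(t, xy):
    x, y = xy
    a = EPS * np.sin(OMEGA * t)
    b = 1 - 2 * EPS * np.sin(OMEGA * t)
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return [u, v]

# Step 1: track backwards from the target point at time T to time 0.
T, target = 20.0, [1.3, 0.4]
back = solve_ivp(velocity, [T, 0.0], target, rtol=1e-8)
x0 = back.y[:, -1]                           # where the fluid particle started

# Step 2: integrate a simple reaction c' = k c (1 - c) along the trajectory.
k, c0 = 1.0, 0.01
reaction = solve_ivp(lambda t, c: k * c * (1 - c), [0.0, T], [c0], rtol=1e-8)
print("initial point:", np.round(x0, 3),
      "final concentration:", round(float(reaction.y[0, -1]), 4))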

Relevance: 60.00%

Abstract:

Purpose - The purpose of this paper is to analyze what transaction costs are acceptable to customers in different investments. In this study, two life insurance contracts, a mutual fund and a risk-free investment are considered as alternative investment forms. The first two products under scrutiny are a life insurance investment with a point-to-point capital guarantee and a participating contract with an annual interest rate guarantee and participation in the insurer's surplus. The policyholder assesses the various investment opportunities using different utility measures. For selected types of risk profiles, the utility position and the investor's preference for the various investments are assessed. Based on this analysis, the authors study which cost levels make all of the products equally rewarding for the investor. Design/methodology/approach - The paper uses risk-neutral valuation, calibration with empirical data, utility and performance measurement, geometric Brownian motion for the underlying dynamics, and numerical examples via Monte Carlo simulation. Findings - In the first step, the financial performance of the various saving opportunities under different assumptions about the investor's utility measurement is studied. In the second step, the authors calculate the level of transaction costs that is allowed in the various products to make all of the investment opportunities equally rewarding from the investor's point of view. A comparison of these results with transaction costs that are common in the market shows that insurance companies must be careful with respect to the level of transaction costs that they pass on to their customers in order to provide attractive payoff distributions. Originality/value - To the best of the authors' knowledge, their research question - i.e. which transaction costs for life insurance products would be acceptable from the customer's point of view - has not been studied in the above-described context so far.
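The kind of comparison described in the findings can be sketched with a short Monte Carlo experiment: simulate terminal wealth under geometric Brownian motion for a plain fund and for a contract with a point-to-point capital guarantee, deduct a proportional transaction cost, and compare expected CRRA utilities. All parameter values below (drift, volatility, guarantee level, cost rate, risk aversion) are assumptions for illustration, not those of the paper.

# Minimal sketch of a Monte Carlo utility comparison under GBM terminal wealth.
import numpy as np

rng = np.random.default_rng(0)

def terminal_values(premium, mu, sigma, years, n_paths):
    """Terminal wealth of a single premium under geometric Brownian motion."""
    z = rng.standard_normal(n_paths)
    return premium * np.exp((mu - 0.5 * sigma**2) * years
                            + sigma * np.sqrt(years) * z)

def crra_utility(wealth, gamma=3.0):
    return wealth**(1 - gamma) / (1 - gamma)

premium, years, n = 100.0, 10, 100_000
mu, sigma = 0.06, 0.20                       # assumed fund dynamics

# Mutual fund: premium invested directly, no guarantee, no extra costs.
fund = terminal_values(premium, mu, sigma, years, n)

# Guaranteed product: proportional transaction cost deducted up front,
# point-to-point guarantee of 1% p.a. on the gross premium (assumed).
cost_rate = 0.02
guarantee = premium * 1.01**years
insured = np.maximum(guarantee,
                     terminal_values(premium * (1 - cost_rate), mu, sigma, years, n))

print("E[U] fund      :", crra_utility(fund).mean())
print("E[U] guaranteed:", crra_utility(insured).mean())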

Relevance: 60.00%

Abstract:

This article aims to present an incipient set of recommendations on the budgeting/financing of public policies in Brazil and their potential for promoting sex/gender and race/ethnicity equity. The first part systematises research findings concerning sex/gender and race/ethnicity equity. In a second moment, the text presents previously unpublished data on the tax financing of the public budget broken down by gender and race, showing that the greatest burden falls on women, especially black women. The third section emphasises the public budget as a political choice of the State and presents the selected programmes (race and gender) in the 2008-2011 Multi-Year Plan (Plano Plurianual, PPA).

Relevance: 60.00%

Abstract:

The literature on preferences for redistribution has paid little attention to the effect of social mobility on the demand for redistribution, which is in contrast with the literature on class voting, where studies on the effect of social mobility have been very common. Some works have addressed this issue, but no systematic test of the hypotheses connecting social mobility and preferences has been carried out. In this paper we use diagonal reference models to estimate the effect of origin and destination class on preferences for redistribution in a sample of European countries, using data from the European Social Survey. Our findings indicate that social origin matters only to a small extent in explaining preferences, as newcomers tend to adopt the preferences of the destination class. Moreover, we have found only limited evidence supporting the acculturation hypothesis and no support for the status maximization hypothesis. Furthermore, the effect of social origin varies considerably between countries. In a second step of the analysis we investigate which national factors explain this variation. The empirical evidence we present leads to the conclusion that high rates of upward social mobility sharply reduce the effect of social origin on preferences for redistribution.
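Diagonal reference models are typically written as below, where \mu_{ii} and \mu_{jj} are the mean preferences of the "stable" members of the origin class i and the destination class j, w weights origin against destination, and the x_k are covariates (a standard textbook formulation, not necessarily the exact specification estimated in the paper):

E(y_{ij}) = w\,\mu_{ii} + (1 - w)\,\mu_{jj} + \sum_k \beta_k x_k

The estimated weight w then measures how much the origin class still shapes the preferences of mobile respondents.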

Relevance: 60.00%

Abstract:

Background: The incidence of jaundice is high in newborn infants. Since well-appearing newborns are rapidly and routinely discharged from hospital, performing an inexpensive, noninvasive pre-discharge screening test for the evaluation of jaundice seems to be necessary. Objectives: This study was conducted to compare the accuracy of transcutaneous versus serum bilirubin measurements in this regard. Patients and Methods: This was a prospective cross-sectional study conducted in Mahdieh hospital, Tehran. 613 neonates weighing ≥ 1,800 g with a gestational age of ≥ 35 weeks were enrolled. A pre-discharge transcutaneous bilirubin (TcB) test was performed in all of them. Serum samples were taken from neonates with TcB ≥ 5 mg/dL in the first 24 hours and > 8 mg/dL in the second 24 hours. The decision to treat or to recheck the bilirubin level after discharge was made based on the serum bilirubin results. Results: Based on the study protocol, among the 613 studied neonates, 491 (80%) showed high TcB; of them, 240 (49%) showed TcB ≥ 5 mg/dL in the first and 251 (51%) in the second pre-discharge 24 hours. TcB ranged from 3.3 to 17.1 mg/dL; the mean TcB was 6.9 ± 1.7 (mode 6) in the first 24 hours and 9.1 ± 2.1 (mode 10) in the second 24 hours. In the 491 neonates with high TcB, a capillary serum sample was taken as the second step, and 398 neonates showed high total serum bilirubin (TsB) according to the same thresholds used for TcB: 108 (27.1%) neonates showed TsB ≥ 5 mg/dL in the first and 290 (72.9%) in the second 24 hours. According to the study results, TcB has an 81% positive predictive value (PPV) in the diagnosis of hyperbilirubinemia. The correlation coefficient between TcB and TsB reached 0.72 at its highest (P value < 0.001). Conclusions: TcB is an inexpensive, noninvasive and precise pre-discharge screening test for the evaluation of hyperbilirubinemia, with a high PPV. It is highly recommended that it be performed routinely, given the high incidence of hyperbilirubinemia in neonates.
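The reported positive predictive value follows directly from the counts above, treating the 398 neonates with confirmed high TsB as true positives among the 491 screen-positive neonates:

\mathrm{PPV} = \frac{\text{true positives}}{\text{all screen positives}} = \frac{398}{491} \approx 0.81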

Relevance: 60.00%

Abstract:

Background: Prenatal hydronephrosis (PNH) is dilation of the urinary collecting system and is the most frequent neonatal urinary tract abnormality, with an incidence of 1% to 5% of all pregnancies. PNH is defined as an anteroposterior diameter (APD) of the renal pelvis ≥ 4 mm at a gestational age (GA) of < 33 weeks and APD ≥ 7 mm at a GA of ≥ 33 weeks to 2 months after birth. All patients need to be evaluated after birth by postnatal renal ultrasonography (US). In the vast majority of cases, watchful waiting is the only thing to do; others need medical or surgical therapy. Objectives: There is a direct relationship between the APD of the renal pelvis and the outcome of PNH. Therefore, we aimed to find the best cutoff point of renal pelvis APD that predicts a surgical outcome. Patients and Methods: In this retrospective cohort study we followed 200 patients, 1 to 60 days old, with a diagnosis of PNH based on ultrasonography performed before or after birth (prenatally or postnatally detected, respectively). These patients were referred to the nephrology clinic in Zahedan, Iran, from 2011 to 2013. The first step of the investigation was a postnatal renal US performed by the same expert radiologist, classifying the patients into three groups: normal, mild/moderate and severe. The second step was to perform a voiding cystourethrogram (VCUG) for mild/moderate to severe cases at 4 - 6 weeks of life. A Tc-DTPA (diethylenetriaminepentaacetic acid) scan was the last step, performed in those with a normal VCUG who did not show improvement on follow-up US examination, to evaluate obstruction and renal function. Finally, all patients with mild/moderate to severe PNH received conservative therapy, and surgery was reserved only for progressive cases, obstruction or renal function ≤ 35%. All patients' data and radiologic information were recorded in separate data forms and then analyzed with SPSS (version 22). Results: The 200 screened PNH patients (male-to-female ratio 3.5:1) underwent a first postnatal control US, of whom 65% had normal findings, 18% mild/moderate and 17% severe hydronephrosis. 167 patients had a VCUG, of whom 20.82% had VUR. 112 patients underwent DTPA, with the following results: 50 patients had obstruction and 62 patients showed no obstructive findings. Finally, 54% of the 200 patients recovered with conservative therapy, 12.5% required surgery, and the remainder improved without any surgical intervention. Conclusions: The best cutoff point of the anteroposterior renal pelvis diameter that led to surgery was 15 mm, with a sensitivity of 88% and a specificity of 74%.
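The way such a cutoff can be derived from sensitivity and specificity is sketched below using Youden's J statistic; the APD values and surgery labels are synthetic placeholders, not the study cohort, and the study's actual cutoff-selection procedure may differ.

# Minimal sketch of choosing an APD cutoff by maximising Youden's J
# (sensitivity + specificity - 1) over candidate thresholds. Data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
apd_no_surgery = rng.normal(9.0, 3.0, 170)    # mm, hypothetical
apd_surgery = rng.normal(18.0, 4.0, 30)       # mm, hypothetical
apd = np.concatenate([apd_no_surgery, apd_surgery])
needs_surgery = np.concatenate([np.zeros(170, bool), np.ones(30, bool)])

best = None
for cutoff in np.arange(5.0, 30.0, 0.5):
    predicted = apd >= cutoff
    sens = np.mean(predicted[needs_surgery])       # true positive rate
    spec = np.mean(~predicted[~needs_surgery])     # true negative rate
    j = sens + spec - 1
    if best is None or j > best[0]:
        best = (j, cutoff, sens, spec)

j, cutoff, sens, spec = best
print(f"cutoff {cutoff} mm: sensitivity {sens:.2f}, specificity {spec:.2f}")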