934 results for "Approximate filtering"
Abstract:
The current demands of the nursing profession require that teaching be oriented toward interrelating the different roles to be performed in daily practice, in order to gain experience in learning and thus increase the quality of nursing care. Problem-based learning is important for achieving this objective. It aims, first of all, for students to learn what will allow them to function in professional life as naturally as possible, starting from a clear and deep understanding of the evidence on which they must act. To this end, clinical cases were designed with objectives that required the integration of knowledge, attitudes and values, in different phases to be developed over a predetermined period of time. We also proposed a teaching strategy that would allow students to incorporate the scientific knowledge that supports clinical practice, in order to bring theory and practice closer together. Students are expected to seek an answer based on the best available scientific evidence in order to make a decision regarding patient care. The objectives of the study are: to evaluate learning based on case simulation globally; to evaluate how students rate the integration of the nursing model and the care process in learning based on case simulation; to assess the sensations perceived by the student during the case simulation; to assess the student's attitude toward incorporating scientific evidence to improve clinical practice; to evaluate the degree of difficulty reported by the student regarding the documentation process; and to evaluate the suitability of the student's argumentation and decision in response to the question posed in the clinical case. Methodology: The subject Medical-Surgical Nursing:
Adult I of the Department of Nursing of the Universitat de Vic began a problem-based learning experience with second-year students. The teachers responsible for the seminars evaluated the experience through a survey, answered one month after the laboratory simulation, when the results were discussed between teachers and students after viewing the recording made during the session. In the context of the case-simulation seminar, a question/problem was introduced, from which the students, working in groups, had to document themselves with the support of a guide. To assess attitudes toward this question/problem, a Likert-type questionnaire was designed. The degree of difficulty was recorded using rating scales. To evaluate the decision made, the summary syntheses submitted in the written work of the different groups were assessed. Results: The laboratory simulation was rated by a high percentage of students (68.8%) with scores between 6 and 8, while 26.6% placed it between 9 and 10 and only 4.7% scored it 5. The integration of the nursing model was rated by 86% with a score between 7 and 10. The overall assessment of the simulation was scored 8 by 34.4% of students, followed by 28.1% who gave it a 7; 7.2% scored it between 9 and 10. 93.3% stated that knowing the documentary sources would help them improve care, and 86.7% expect to obtain solid arguments for their decisions if the documentation consulted is of good quality. 77.8% of the students report being more satisfied knowing how to incorporate evidence-based decision making.
Regarding the degree of difficulty in the documentation process, the greatest difficulty lies in how to search bibliographic reference databases. Conclusions: Student learning through case simulation is a valid strategy that students rate positively, while also allowing them to develop skills for professional practice. The teaching strategy designed to integrate evidence into decision making is considered positive; nevertheless, after analyzing the results, some aspects must be modified to improve it: tutoring to improve the documentation process, and placing more emphasis on criticism and reflection, so that research findings are channeled into practice.
Abstract:
Critically ill patients depend on artificial nutrition for the maintenance of their metabolic functions and lean body mass, as well as for limiting underfeeding-related complications. Current guidelines recommend enteral nutrition (EN), if possible within the first 48 hours, as the best way to provide nutrients and prevent infections. EN may be difficult to implement or may be contraindicated in some patients, such as those with anatomic intestinal continuity problems or splanchnic ischemia. A series of contradictory trials regarding the best route and timing of feeding have left the medical community with great uncertainty about the place of parenteral nutrition (PN) in critically ill patients. Many of the deleterious effects attributed to PN result from inadequate indications or from overfeeding. The latter is due firstly to the easier delivery of nutrients by PN compared with EN, which increases the risk of overfeeding, and secondly to the use of approximate energy targets, generally based on predictive equations: these equations are static and inaccurate in about 70% of patients. Such high uncertainty about requirements compromises attempts at conducting nutrition trials without indirect calorimetry support, because the results cannot be trusted; indeed, underfeeding and overfeeding are equally deleterious. An individualized therapy is required. A pragmatic approach to feeding is proposed: first, attempt EN whenever possible and as early as possible; then use indirect calorimetry if available, and monitor delivery and the response to feeding; finally, consider combining EN with PN in case of insufficient EN from day 4 onwards.
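The "predictive equations" the abstract contrasts with indirect calorimetry are static formulas that estimate resting energy expenditure from body size, sex and age. As a minimal illustration only (one widely used formula, the revised Harris-Benedict equation; any clinical use involves stress/activity adjustment factors not shown here):

```python
def harris_benedict_kcal_per_day(weight_kg: float, height_cm: float,
                                 age_yr: float, male: bool) -> float:
    """Resting energy expenditure (kcal/day) via the revised
    Harris-Benedict equation (Roza & Shizgal 1984 coefficients)."""
    if male:
        return 88.362 + 13.397 * weight_kg + 4.799 * height_cm - 5.677 * age_yr
    return 447.593 + 9.247 * weight_kg + 3.098 * height_cm - 4.330 * age_yr

# e.g. a 70 kg, 175 cm, 40-year-old man:
ree = harris_benedict_kcal_per_day(70, 175, 40, male=True)  # ~1639 kcal/day
```

Because such formulas ignore the patient's actual metabolic state, they can miss the measured expenditure substantially, which is the abstract's argument for calorimetry-guided, individualized targets.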
Abstract:
Less is known about social welfare objectives when price changes are costly, as in Rotemberg (1982), than in Calvo-type models. We derive a quadratic approximate welfare function around a distorted steady state for the costly price adjustment model. We highlight the similarities and differences with respect to the Calvo setup. Both models imply inflation and output stabilization goals. We explain why the degree of distortion in the economy influences inflation aversion in the Rotemberg framework in a way that differs from the Calvo setup.
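As a sketch of the objects involved (notation illustrative; the paper's own derivation and coefficients are not reproduced here): in the Rotemberg model each firm pays a quadratic cost of changing its nominal price, and a second-order approximation of household welfare around the steady state yields a quadratic per-period loss in inflation and the output gap:

```latex
% Quadratic price-adjustment cost (Rotemberg, 1982), in units of output:
\frac{\phi}{2}\left(\frac{P_t}{P_{t-1}}-1\right)^{2} Y_t ,
\qquad
% second-order approximate per-period welfare loss:
L_t \approx \frac{1}{2}\left(\lambda_{\pi}\,\pi_t^{2}
      + \lambda_{x}\,x_t^{2}\right).
```

In such approximations the relative weight on inflation depends on the adjustment-cost parameter \(\phi\) and, as the abstract notes, on the degree of steady-state distortion, which is where the Rotemberg and Calvo setups diverge.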
Abstract:
We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expansion formula as in Ait-Sahalia (2008). We provide numerical examples for European stock option pricing in Black and Scholes (1973), Merton (1976) and Kou (2002).
Abstract:
Human immunodeficiency virus type 1 (HIV-1) elite controllers maintain undetectable levels of viral replication in the absence of antiretroviral therapy (ART), but their underlying immunological and virological characteristics may vary. Here, we used a whole-genome transcriptional profiling approach to characterize gene expression signatures of CD4 T cells from an unselected cohort of elite controllers. The transcriptional profiles for the majority of elite controllers were similar to those of ART-treated patients but different from those of HIV-1-negative persons. Yet, a smaller proportion of elite controllers showed an alternative gene expression pattern that was indistinguishable from that of HIV-1-negative persons but different from that of highly active antiretroviral therapy (HAART)-treated individuals. Elite controllers with the latter gene expression signature had significantly higher CD4 T cell counts and lower levels of HIV-1-specific CD8(+) T cell responses but did not significantly differ from other elite controllers in terms of HLA class I alleles, HIV-1 viral loads determined by ultrasensitive single-copy PCR assays, or chemokine receptor polymorphisms. Thus, these data identify a specific subgroup of elite controllers whose immunological and gene expression characteristics approximate those of HIV-1-negative persons.
Abstract:
In this paper we study a behavioral model of conflict that provides a basis for choosing certain indices of dispersion as indicators for conflict. We show that the (equilibrium) level of conflict can be expressed as an (approximate) linear function of the Gini coefficient, the Herfindahl-Hirschman fractionalization index, and a specific measure of polarization due to Esteban and Ray.
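The three indices have standard grouped-data definitions, which can be sketched as follows (the group shares and incomes are hypothetical; the Esteban-Ray constant K and sensitivity alpha are set to 1 for simplicity, not taken from the paper):

```python
import numpy as np

def dispersion_indices(pi, y, alpha=1.0):
    """Gini coefficient, Herfindahl-Hirschman fractionalization index,
    and an Esteban-Ray-type polarization index for grouped data
    (pi: group population shares, y: group incomes)."""
    pi = np.asarray(pi, dtype=float)
    y = np.asarray(y, dtype=float)
    diff = np.abs(y[:, None] - y[None, :])          # pairwise income gaps
    mu = pi @ y                                      # mean income
    gini = (pi[:, None] * pi[None, :] * diff).sum() / (2.0 * mu)
    frac = 1.0 - (pi ** 2).sum()                     # HH fractionalization
    polar = ((pi[:, None] ** (1.0 + alpha)) * pi[None, :] * diff).sum()
    return gini, frac, polar

# hypothetical two-group society: equal shares, incomes 0 and 1
g, f, p = dispersion_indices([0.5, 0.5], [0.0, 1.0])
```

Note the structural similarity: the Gini and the polarization index differ only in the exponent on one of the share terms, which is what makes a linear relation among them plausible.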
Abstract:
This paper develops a new test of true versus spurious long memory, based on log-periodogram estimation of the long memory parameter using skip-sampled data. A correction factor is derived to overcome the bias in this estimator due to aliasing. The procedure is designed to be used in the context of a conventional test of significance of the long memory parameter, and a composite test procedure is described that has known asymptotic size and is consistent. The test is implemented using the bootstrap, with the null distribution approximated by a dependent-sample bootstrap technique that captures short-run dependence after fractional differencing. The properties of the test are investigated in a set of Monte Carlo experiments. The procedure is illustrated by applications to exchange rate volatility and dividend growth series.
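The building blocks can be sketched as follows: the Geweke-Porter-Hudak log-periodogram regression for the memory parameter d, applied both to a simulated ARFIMA(0, d, 0) series and to its skip-sampled version. The bandwidth choice, the simulation design and all numbers are illustrative; the paper's aliasing correction is not reproduced here, and without it the skip-sampled estimate is biased (which is the paper's point):

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke-Porter-Hudak log-periodogram regression estimate of the
    long memory parameter d, using the m = n**power lowest frequencies."""
    n = len(x)
    m = int(n ** power)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - np.mean(x))[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    X = -2.0 * np.log(2.0 * np.sin(lam / 2.0))   # regressor: log f = c + d*X
    d_hat, _ = np.polyfit(X, np.log(I), 1)
    return d_hat

def arfima_0d0(n, d, rng, burn=200):
    """Simulate ARFIMA(0, d, 0) by truncated MA(inf):
    psi_0 = 1, psi_j = psi_{j-1} * (j - 1 + d) / j."""
    e = rng.standard_normal(n + burn)
    psi = np.ones(n + burn)
    j = np.arange(1, n + burn)
    psi[1:] = np.cumprod((j - 1 + d) / j)
    x = np.convolve(e, psi)[:n + burn]
    return x[burn:]

rng = np.random.default_rng(0)
x = arfima_0d0(4000, 0.4, rng)
d_full = gph_estimate(x)       # estimate on the full series
d_skip = gph_estimate(x[::4])  # skip-sampled: aliasing biases this estimate
```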
Abstract:
Acid mine drainage (AMD) from the Zn-Pb(-Ag-Bi-Cu) deposit of Cerro de Pasco (Central Peru) and waste water from a Cu-extraction plant have been discharged since 1981 into Lake Yanamate, a natural lake with carbonate bedrock. The lake has developed a highly acidic pH of ~1. Mean lake water chemistry was characterized by 16,775 mg/L acidity as CaCO3, 4330 mg/L Fe and 29,250 mg/L SO4. Mean trace element concentrations were 86.8 mg/L Cu, 493 mg/L Zn, 2.9 mg/L Pb and 48 mg/L As, which did not differ greatly from the discharged AMD. Most elements showed increasing concentrations from the surface to the lake bottom at a maximum depth of 41 m (e.g. from 3581 to 5433 mg/L Fe and from 25,609 to 35,959 mg/L SO4). The variations in the H and O isotope compositions and the element concentrations within the upper 10 m of the water column suggest mixing with recently discharged AMD, shallow groundwater and precipitation waters. Below 15 m a stagnant zone had developed. Gypsum (saturation index, SI ~0.25) and anglesite (SI ~0.1) were in equilibrium with the lake water. Jarosite was oversaturated (SI ~1.7) in the upper part of the water column, resulting in downward settling and re-dissolution in the lower part of the water column (SI ~-0.7). Accordingly, jarosite was only found in sediments at less than 7 m water depth. At the lake bottom, a layer of gel-like material (~90 wt.% water) of pH ~1 with a total organic C content of up to 4.40 wet wt.% originated from the kerosene discharge of the Cu-extraction plant and had contaminant element concentrations similar to the lake water. Below the organic layer followed a layer of gypsum with pH 1.5, which overlay the dissolving carbonate sediments of pH 5.3-7. In these two layers the contaminant elements were enriched relative to lake water in the sequence As < Pb ≈ Cu < Cd < Zn = Mn with increasing depth.
This sequence of enrichment is explained by the following processes: (i) adsorption of As on Fe-hydroxides coating plant roots at low pH (up to 3326 mg/kg As), (ii) adsorption at increasing pH near the gypsum/calcite boundary (up to 1812 mg/kg Pb, 2531 mg/kg Cu, and 36 mg/kg Cd), and (iii) precipitation of carbonates (up to 5177 mg/kg Zn and 810 mg/kg Mn; all data corrected to a wet base). The infiltration rate was approximately equal to the discharge rate, thus gypsum and hydroxide precipitation had not resulted in complete clogging of the lake bedrock.
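The saturation indices quoted above follow the standard definition SI = log10(IAP/Ksp), where IAP is the ion activity product of the dissolved species. A minimal sketch (the ion activities below and the gypsum pKsp of about 4.58 at 25 °C are illustrative assumptions, not values from this study):

```python
import math

def saturation_index(iap: float, ksp: float) -> float:
    """SI = log10(IAP / Ksp): SI ~ 0 means equilibrium with the mineral,
    SI > 0 oversaturation (precipitation), SI < 0 undersaturation."""
    return math.log10(iap / ksp)

# hypothetical gypsum (CaSO4.2H2O) example with assumed ion activities
a_ca, a_so4 = 10 ** -2.2, 10 ** -2.0          # assumed, not measured
si_gypsum = saturation_index(a_ca * a_so4, 10 ** -4.58)
```

In practice the activities are computed from measured concentrations with an activity model (e.g. in a speciation code), which is beyond this sketch.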
Abstract:
A large influenza epidemic took place in Havana during the winter of 1988. The epidemiologic surveillance unit of the Pedro Kouri Institute of Tropical Medicine detected the beginning of the epidemic wave. The Rvachev-Baroyan mathematical model of the geographic spread of an epidemic was used to forecast this epidemic under routine conditions of the public health system. The expected number of individuals who would attend outpatient services because of influenza-like illness was calculated and communicated to the health authorities early enough to permit the introduction of available control measures. The approximate date of the epidemic peak, the daily expected number of individuals attending medical services, and the approximate time of the end of the epidemic wave were estimated. The prediction error was 12%. The model was sufficiently accurate to warrant its use as a practical forecasting tool in the Cuban public health system.
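The Rvachev-Baroyan model itself couples epidemic dynamics across cities via transport data, and its equations are not reproduced in the abstract. As a much simpler single-population sketch of how such a model yields a peak date and daily case counts (all parameters hypothetical, discrete daily time steps):

```python
def sir_curve(beta, gamma, i0, days):
    """Discrete-time (daily) SIR sketch: returns the fraction of the
    population infectious on each day.  beta: transmission rate,
    gamma: recovery rate, i0: initially infectious fraction."""
    s, i = 1.0 - i0, i0
    curve = []
    for _ in range(days):
        new_inf = beta * s * i          # new infections this day
        s -= new_inf
        i += new_inf - gamma * i        # infections in, recoveries out
        curve.append(i)
    return curve

curve = sir_curve(beta=0.5, gamma=0.25, i0=0.001, days=120)  # R0 = 2
peak_day = max(range(len(curve)), key=curve.__getitem__)
```

Scaling the infectious fraction by the population and a consultation rate gives the expected daily attendance at outpatient services, the quantity the surveillance unit communicated to the authorities.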
Abstract:
Limiting dilution analysis was used to quantify Trypanosoma cruzi in the lymph nodes, liver and heart of Swiss and C57BL/10 mice. The results showed that, in Swiss and BL/10 mice infected with the T. cruzi Y strain, the number of parasites/mg of tissue increased during the course of the infection in both types of mice, although a greater number of parasites was observed in heart tissue from Swiss mice than from BL/10. With regard to liver tissue, the parasite load in the initial phase of infection was higher than in the heart. In experiments using the T. cruzi Colombian strain, the parasite load in the heart of Swiss and BL/10 mice increased relatively slowly, although high levels of parasitization were nonetheless observable by the end of the infection. In the liver and lymph nodes, the concentration of parasites was lower over the entire course of infection than in the heart. Both strains thus maintained their characteristic tissue tropisms. The limiting dilution assay (LDA) proved to be an appropriate method for more precise quantification of T. cruzi, comparing favorably with direct microscopic methods that only give approximate scores.
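Limiting dilution analysis rests on single-hit Poisson statistics: if organisms are distributed randomly across replicate wells, the fraction of negative wells at a given input is exp(-f*n), so the frequency f can be recovered from the negative fraction. A minimal sketch (the plate numbers are hypothetical, not the study's data):

```python
import math

def lda_frequency(input_per_well: float, wells: int, negative_wells: int) -> float:
    """Single-hit Poisson estimate: P(well negative) = exp(-f * input),
    hence f = -ln(negative fraction) / input per well."""
    neg_frac = negative_wells / wells
    return -math.log(neg_frac) / input_per_well

# hypothetical plate: 96 wells seeded with 1000 tissue equivalents each,
# 35 wells show no parasite growth
f = lda_frequency(1000, 96, 35)   # roughly one parasite per 1000 equivalents
```

A full LDA fit pools several dilutions by maximum likelihood; the single-dilution formula above is the core of it.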
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by high-throughput DNA sequencing. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and previously unrecognized artifacts of the method. The sequence tag distribution in the genome is not uniform, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome.
We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
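As a toy sketch of the two analysis ideas, hot-spot filtering and unbiased random sampling (the binning scheme, threshold rule and all numbers are illustrative assumptions, not the thesis pipeline):

```python
import numpy as np

def filter_hotspot_tags(positions, bin_size=1000, z=6.0):
    """Drop sequence tags falling in bins whose tag count vastly exceeds
    the typical bin.  A crude artifact filter: the median is used so the
    hot-spots themselves do not inflate the threshold."""
    bins = positions // bin_size
    counts = np.bincount(bins)
    typical = np.median(counts[counts > 0])
    threshold = typical + z * np.sqrt(typical)   # Poisson-like spread
    return positions[counts[bins] <= threshold]

def sample_tags(positions, k, rng):
    """Unbiased random subsample of k tags without replacement."""
    return rng.choice(positions, size=k, replace=False)

rng = np.random.default_rng(1)
uniform = rng.integers(0, 1_000_000, 10_000)   # background tags
hotspot = np.full(5_000, 123_456)              # artifactual pile-up at one locus
tags = np.concatenate([uniform, hotspot])
clean = filter_hotspot_tags(tags)              # pile-up bin is removed
subsample = sample_tags(clean, 5_000, rng)     # equal-depth comparison set
```

Downsampling to equal depth, as in `sample_tags`, is what makes tag counts comparable across ChIP-Seq datasets of different sequencing depths.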
Abstract:
The ability to model biodiversity patterns is of prime importance in this era of severe environmental crisis. Species assemblages along environmental gradients are subject to the interplay of biotic interactions in addition to abiotic environmental filtering. Accounting for complex biotic interactions across a wide array of species remains challenging. Here, we propose to use food web models that can infer potential interaction links between species as a constraint in species distribution models. Using a plant-herbivore (butterfly) interaction dataset, we demonstrate that this combined approach improves both species distribution and community forecasts. Most importantly, the combined approach is particularly useful for modelling generalist species with many potential interaction links, for which gaps in the literature are common. Our combined approach points to a promising way forward for modelling the spatial variation of entire species interaction networks. Our work has implications for studies of range-shifting species and invasion biology, where it may be unknown how a given biota might interact with a potential invader or under future climates.
Abstract:
When using a polynomial approximating function, the most contentious aspect of the heat balance integral method (HBIM) is the choice of the power of the highest-order term. In this paper we employ a method recently developed for thermal problems, in which the exponent is determined during the solution process, to analyse Stefan problems. This is achieved by minimising an error function. The solution requires no knowledge of an exact solution and generally produces significantly better results than all previous HBIM models. The method is illustrated by first applying it to standard thermal problems. A Stefan problem with an analytical solution is then discussed and the results compared with the approximate solution. An ablation problem is also analysed and the results compared against a numerical solution. In both examples the agreement is excellent. A Stefan problem in which the boundary temperature increases exponentially is then analysed; this highlights the difficulties that can be encountered with a time-dependent boundary condition. Finally, melting with a time-dependent flux is briefly analysed, without analytical or numerical results against which to assess the accuracy.
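For the classic semi-infinite conduction problem (unit step in surface temperature), the approach can be sketched as follows: with the profile T = (1 - x/delta)^n, the heat balance integral gives delta^2 = 2*alpha*n*(n+1)*t, and the exponent n is then chosen to minimise the L2 norm of the heat-equation residual, with no reference to the exact solution. The details below are a reconstruction of the general idea, not the paper's exact formulation; the minimiser is independent of the arbitrary choices alpha = t = 1:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def residual_norm(n, alpha=1.0, t=1.0):
    """Integral of (T_t - alpha*T_xx)^2 over the heat penetration depth,
    for the HBIM profile T = (1 - x/delta)^n with
    delta^2 = 2*alpha*n*(n+1)*t from the heat balance integral."""
    delta = np.sqrt(2.0 * alpha * n * (n + 1) * t)
    ddt = alpha * n * (n + 1) / delta            # d(delta)/dt at this t
    def r(x):
        u = 1.0 - x / delta
        T_t = n * u ** (n - 1) * x * ddt / delta ** 2
        T_xx = n * (n - 1) * u ** (n - 2) / delta ** 2
        return T_t - alpha * T_xx
    val, _ = quad(lambda x: r(x) ** 2, 0.0, delta)
    return val

opt = minimize_scalar(residual_norm, bounds=(2.0, 3.0), method="bounded")
n_opt = opt.x    # ~2.23 for this problem, versus Goodman's classical n = 2
```

The exact solution erfc(x/(2*sqrt(alpha*t))) is never used in selecting n, which is what makes the idea transferable to Stefan problems without analytical solutions.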
Abstract:
Piped water is used to remove hydration heat from concrete blocks during construction. In this paper we develop an approximate model for this process. The problem reduces to solving a one-dimensional heat equation in the concrete, coupled with a first-order differential equation for the water temperature. Numerical results are presented and the effect of varying model parameters is shown. An analytical solution is also provided for a steady-state, constant heat generation model. This helps highlight the dependence on certain parameters and can therefore aid in the design of cooling systems.
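The reduced model can be sketched as an explicit finite-difference scheme: a 1D heat equation with volumetric heat generation in the concrete, a Robin condition coupling the cooled face to the water, and a first-order ODE for the water temperature. Everything below is nondimensional and all parameters are illustrative, not the paper's; the marched solution is checked against the steady-state, constant-generation analytical solution mentioned in the abstract:

```python
import numpy as np

# Nondimensional sketch: T_t = T_xx + q in the concrete (0 < x < 1),
# symmetry (insulated) at x = 1, Robin coupling to the water at x = 0,
# and a first-order ODE for the water temperature Tw (inlet water at T = 0):
#     dTw/dt = gamma*(T(0) - Tw) - kappa*Tw
q, Bi, gamma, kappa = 1.0, 5.0, 0.5, 1.0     # all parameters illustrative
nx = 21
dx = 1.0 / (nx - 1)
dt = 0.3 * dx * dx                           # explicit-scheme stability margin
T = np.zeros(nx)
Tw = 0.0
for _ in range(int(20.0 / dt)):              # march to (near) steady state
    ghost_lo = T[1] - 2.0 * dx * Bi * (T[0] - Tw)   # Robin BC ghost node
    lap = np.empty_like(T)
    lap[1:-1] = T[2:] - 2.0 * T[1:-1] + T[:-2]
    lap[0] = T[1] - 2.0 * T[0] + ghost_lo
    lap[-1] = 2.0 * (T[-2] - T[-1])                 # insulated far end
    Tw += dt * (gamma * (T[0] - Tw) - kappa * Tw)
    T += dt * (lap / dx ** 2 + q)

# Steady-state analytical solution for comparison:
#   Tw* = gamma*q/(kappa*Bi),  T(0) = Tw* + q/Bi,  T(x) = T(0) + q*(x - x^2/2)
```

The closed-form steady state makes the parameter dependence explicit: the water temperature scales with q/(kappa*Bi), so either a larger water flow (kappa) or better pipe-wall heat transfer (Bi) lowers it, which is the design trade-off the abstract alludes to.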
Application of standard and refined heat balance integral methods to one-dimensional Stefan problems
Abstract:
The work in this paper concerns the study of conventional and refined heat balance integral methods for a number of phase change problems. These include standard test problems, with both one and two phase changes, which have exact solutions that enable us to test the accuracy of the approximate solutions. We also consider situations where no analytical solution is available and compare the approximate solutions to numerical ones. It is popular to use a quadratic profile as an approximation of the temperature, but we show that a cubic profile, seldom considered in the literature, is far more accurate in most circumstances. In addition, the refined integral method can give still greater improvement, and we develop a variation on this method which turns out to be optimal in some cases. We assess which integral method is better for various problems, showing that the answer depends largely on the specified boundary conditions.