928 results for Effective notch method
Abstract:
OBJECTIVE: To experimentally compare two classic techniques described for manual suture of the bronchial stump. METHODS: We used pig organs, with trachea and lungs isolated and preserved by refrigeration. We dissected 30 bronchi, which were divided into three groups of ten according to caliber: 3 mm, 5 mm, and 7 mm. In each group, five bronchi were sutured with simple, interrupted, extramucosal stitches, and the other five with the technique proposed by Ramirez and modified by Santos et al. Once the sutures were finished, the anastomoses were tested under compressed-air ventilation at an endotracheal pressure of 20 mmHg. RESULTS: The Ramirez Gama suture was more effective in the 3, 5, and 7 mm bronchi, with no air leak even after subjecting them to a tracheal pressure of 20 mmHg. The simple interrupted sutures were less effective, with leakage in six of the 15 tested bronchi, especially at the angles of the sutures. These differences were not statistically significant (p = 0.08). CONCLUSION: Manual sutures of the bronchial stumps were more effective when the modified Ramirez Gama suture was used, in the bronchial calibers tested, under increased endotracheal pressure.
Abstract:
Platelet-rich plasma (PRP) is an easy and inexpensive product to obtain, and stands out for the growth factors it contributes to tissue repair. PRP is obtained by centrifuging whole blood at specific times and gravitational forces. The present work aimed to study a double centrifugation method for obtaining PRP, in order to evaluate the effective increase of platelet concentration in the final product, the preparation of PRP gel, and the optimization of the preparation time of the final sample. Fifteen female White New Zealand rabbits underwent blood sampling for the preparation of PRP. Samples were separated into two sterile tubes containing sodium citrate. The tubes were submitted to the double centrifugation protocol, with the lid closed, at 1600 revolutions per minute (rpm) for 10 minutes, resulting in the separation of red blood cells from plasma with platelets and leukocytes. The tubes were then opened, and the plasma was pipetted and transferred into another sterile tube. The plasma was centrifuged again at 2000 rpm for 10 minutes; as a result it split into two parts: platelet-poor plasma (PPP) on top and the platelet button at the bottom. Part of the PPP was discarded so that only 1 ml remained in the tube along with the platelet button. This material was gently agitated to resuspend the platelets and activated by adding 0.3 ml of calcium gluconate, resulting in PRP gel. The double centrifugation protocol was able to raise the platelet concentration to three times that of the initial blood sample. The 0.3 ml of calcium gluconate used for platelet activation was sufficient to coagulate the sample. Coagulation time ranged from 8 to 20 minutes, with an average of 17.6 minutes. The whole procedure, from blood centrifugation to PRP gel, took only 40 minutes. It was concluded that PRP was successfully obtained by the double centrifugation protocol, which increases the platelet concentration of the sample compared with whole blood, allowing its use in surgical procedures. Furthermore, the preparation time of only 40 minutes is appropriate, and calcium gluconate is able to promote the activation of platelets.
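The "specific time and gravitational forces" mentioned above connect through the standard rotor formula RCF = 1.118 × 10⁻⁵ × r × rpm². As a minimal sketch (the rotor radius is an assumed example value, not reported in the study), the two spin speeds of the protocol convert to relative centrifugal force as follows:

```python
def rcf_from_rpm(rpm: float, rotor_radius_cm: float) -> float:
    """Relative centrifugal force (x g) from rotor speed and radius,
    via the standard conversion RCF = 1.118e-5 * r_cm * rpm**2."""
    return 1.118e-5 * rotor_radius_cm * rpm ** 2

# Hypothetical 8 cm rotor radius (not reported in the abstract).
for rpm in (1600, 2000):  # the two spins of the double-centrifugation protocol
    print(f"{rpm} rpm -> {rcf_from_rpm(rpm, 8.0):.0f} x g")
```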
Abstract:
This paper gives a detailed presentation of the Substitution-Newton-Raphson method, suitable for large sparse non-linear systems. It combines the Successive Substitution method and the Newton-Raphson method in such a way as to take the best advantage of both, retaining the convergence features of Newton-Raphson while keeping the low memory and time requirements of Successive Substitution schemes. The large system is solved with only a few effective variables, using the greatest possible part of the model equations in substitution fashion to fix the remaining variables, while maintaining the convergence characteristics of Newton-Raphson. The methodology is exemplified through a simple algebraic system, and applied to a simple thermodynamic, mechanical, and heat-transfer model of a single-stage vapor-compression refrigeration system. Three distinct approaches to reproducing the thermodynamic properties of the refrigerant R-134a are compared: linear interpolation of tabulated data, polynomial fitted curves, and functions derived from the Helmholtz free energy.
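As a hedged illustration of the hybrid scheme described above (a sketch of the idea, not the paper's implementation), the snippet below applies Newton-Raphson to a small set of effective variables x while an inner successive-substitution loop fixes the remaining variables y from the bulk of the model equations; the toy system and all names are invented for the example:

```python
import numpy as np

def hybrid_solve(f_outer, substitute, x0, y0, tol=1e-10, max_iter=50, h=1e-7):
    """Substitution-Newton-Raphson sketch: Newton iterates only on the few
    effective variables x; the remaining variables y are fixed at each step
    by successive substitution, keeping the Jacobian small."""
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for _ in range(max_iter):
        for _ in range(100):  # inner loop: substitution fixes y for current x
            y_new = substitute(x, y)
            converged = np.max(np.abs(y_new - y)) < tol
            y = y_new
            if converged:
                break
        r = f_outer(x, y)
        if np.max(np.abs(r)) < tol:
            return x, y
        J = np.empty((len(r), len(x)))  # finite-difference Jacobian w.r.t. x only
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f_outer(xp, y) - r) / h
        x = x - np.linalg.solve(J, r)
    raise RuntimeError("no convergence")

# Toy system: solve x**2 + y - 3 = 0 with y fixed by the substitution y = exp(-x).
sol_x, sol_y = hybrid_solve(
    f_outer=lambda x, y: np.array([x[0] ** 2 + y[0] - 3.0]),
    substitute=lambda x, y: np.array([np.exp(-x[0])]),
    x0=[1.0], y0=[0.5],
)
print(sol_x, sol_y)  # x ~ 1.677, y = exp(-x)
```

The design point is that the Jacobian is assembled and factored only for the few Newton variables, which is where the memory saving over a full Newton-Raphson comes from.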
Abstract:
Interest in working capital management has increased among practitioners and researchers because the financial crisis of 2008 caused the general financial situation to deteriorate. The importance of managing working capital effectively increased dramatically during the financial crisis. On one hand, companies highlighted the importance of working capital management as part of short-term financial management to overcome funding difficulties. On the other hand, academia has highlighted the need to analyze working capital management from a wider perspective, namely the value chain perspective. Previously, academic articles mostly discussed working capital management from a company-centered perspective. The objective of this thesis was to put working capital management in a wider and more academic perspective and to present case studies of the value chains of industries, as instrumental to the theoretical contributions, with practical contributions complementary to the theoretical contributions and conclusions. The principal assumption of this thesis is that self-financing of value chains can be established through effective working capital management. Thus, the thesis introduces the financial value chain analysis method, which is employed in the empirical studies. The effectiveness of working capital management of the value chains is studied through the cycle time of working capital. The financial value chain analysis method employed in this study is designed for considering value-chain-level phenomena. It provides a holistic picture of the value chain through financial figures and extends value chain analysis to the industry level. Working capital management is studied through the cash conversion cycle, which measures the length (in days) of time a company has funds tied up in working capital, starting from the payment of purchases to the supplier and ending when remittance of sales is received from the customers. The working capital management practices employed in the automotive, pulp and paper, and information and communication technology (ICT) industries were studied in this research project. Additionally, the Finnish pharmaceutical industry was studied to obtain a deeper understanding of the working capital management of a value chain. The results indicate that the cycle time of working capital is constant in the value chain context over time. The cash conversion cycles of the automotive, pulp and paper, and ICT industries are on average 70, 60, and 40 days, respectively. The difference is mainly a consequence of the different cycle times of inventories. The financial crisis of 2008 affected the working capital management of the industries similarly: both the cycle time of accounts receivable and that of accounts payable increased between 2008 and 2009. The results suggest that the companies of the automotive, pulp and paper, and ICT value chains were not able to self-finance. Nor do the results indicate an improvement in the value chains' position with regard to working capital management. The findings suggest that companies operating in the Finnish pharmaceutical industry are interested in developing their own working capital management, but collaboration with value chain partners is not considered interesting. Competition no longer occurs between individual companies, but between value chains. Therefore, the financial value chain analysis method introduced in this thesis has the potential to support value chains in improving their competitiveness.
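The cash conversion cycle used above is a simple day-count identity; a minimal sketch (with illustrative figures, not the thesis data) shows the computation:

```python
def cash_conversion_cycle(dio_days: float, dso_days: float, dpo_days: float) -> float:
    """Days funds are tied up in working capital, from paying suppliers
    until customer remittances arrive:
    CCC = days inventory outstanding + days sales outstanding
          - days payables outstanding."""
    return dio_days + dso_days - dpo_days

# Illustrative values only, chosen to land near the ~70-day average
# reported for the automotive value chain.
print(cash_conversion_cycle(dio_days=55, dso_days=45, dpo_days=30))  # 70
```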
Abstract:
We report a fast (less than 3 h) and cost-effective melting temperature assay method for the detection of single-nucleotide polymorphisms in the MBL2 gene. The protocol, based on the Corbett Rotor Gene real-time PCR platform and SYBR Green I chemistry, yielded sensitive (100%) and specific (100%) PCR amplification in the cohorts studied, without the use of costly fluorophore-labeled probes or post-PCR manipulation. At the end of the PCR, the dissociation protocol consisted of slow heating from 60°C to 95°C in 0.2°C steps, with an 8-s interval between steps. Melting curve profiles were obtained using the dissociation software of the Rotor Gene 3000 apparatus. Samples were analyzed in duplicate and in different PCR runs to test the reproducibility of the technique. No supplementary data handling is required to determine the MBL2 genotype. MBL2 genotyping performed on a cohort of 164 HIV-1-positive Brazilian children and 150 healthy controls, matched for age, sex, and ethnic origin, yielded reproducible results, confirmed by blind direct sequencing of the amplicon. The three MBL2 variants (Arg52Cys, Gly54Asp, Gly57Glu) were grouped together and called allele 0, while the combination of the three wild-type alleles was called allele A. The frequency of A/A homozygotes was significantly higher among healthy controls (0.68) than among HIV-infected children (0.55; P = 0.0234), and the frequency of MBL2 0/0 homozygotes was higher among HIV-1-infected children than among healthy controls (P = 0.0296). The 0 allele was significantly more frequent among the 164 HIV-1-infected children (0.29) than among the 150 healthy controls (0.18; P = 0.0032). Our data confirm the association between the presence of the mutated MBL2 allele (allele 0) and HIV-1 infection in perinatally exposed children. Our results agree with literature data indicating that the presence of allele 0 confers a relative risk of 1.37 for HIV-1 infection through vertical transmission.
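For a sense of where the assay's time goes, the sketch below computes the length of the melt ramp from the parameters quoted above (hold time per step only; cycling and the other run stages are excluded):

```python
# Dissociation ramp: 60 °C to 95 °C in 0.2 °C steps, holding 8 s per step.
start_c, end_c, step_c, hold_s = 60.0, 95.0, 0.2, 8.0

n_steps = round((end_c - start_c) / step_c)  # 175 temperature steps
ramp_min = n_steps * hold_s / 60             # ~23.3 minutes
print(f"{n_steps} steps, ~{ramp_min:.1f} min of melt ramp")
```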
Abstract:
The atrioventricular (AV) node is permanently damaged in approximately 3% of congenital heart surgery operations, requiring implantation of a permanent pacemaker. Improvements in pacemaker design and in alternative treatment modalities require an effective in vivo model of complete heart block (CHB) before testing can be performed in humans. Such a model should enable accurate, reliable, and detectable induction of the surgical pathology. Building on our laboratory's efforts to develop a tissue-engineering therapy for CHB, we describe here an improved in vivo model for inducing chronic AV block. The method employs a right thoracotomy in the adult rabbit, through which the right atrial appendage may be retracted to expose an access channel to the AV node. A novel injection device was designed that both physically restricts needle depth and provides electrical information via an electrocardiogram (ECG) interface. This combination of features gives the researcher real-time guidance for confirming contact with the AV node, and documents its ablation upon formalin injection. While all animals tested could be induced to acute AV block, those with ECG guidance were more likely to maintain chronic heart block for more than 12 h. Our model enables the researcher to reproduce both CHB and the associated peripheral fibrosis that would be present in an open congenital heart surgery, and which would inevitably affect the design and utility of a tissue-engineered AV node replacement.
Abstract:
Tea has been considered a medicine and a healthy beverage since ancient times, but recently it has received a great deal of attention because of its antioxidant properties. Green tea polyphenols have been shown to be effective chemopreventive agents. Recently, investigators have found that EGCG, one of the green tea catechins, may have anti-HIV effects when bound to the CD4 receptor. Many factors can strongly influence the composition of tea, such as species, season, age of the leaf, climate, and horticultural practices (soil, water, minerals, fertilizers). This paper presents the development of an HPLC analytical method, using an RP-18 column and a mobile phase composed of water, acetonitrile, methanol, ethyl acetate, and glacial acetic acid (89:6:1:3:1 v/v/v/v/v), for the simultaneous determination and quantification of caffeine (CAF), catechin (C), epicatechin (EC), and epigallocatechin gallate (EGCG) in samples of Camellia sinensis (green tea) grown in Brazil and harvested in spring, summer, and autumn, in comparison with Brazilian black tea, samples of Japanese and Chinese green tea, and two standardized dry extracts of green tea. The method was statistically evaluated and proved adequate for the qualitative and quantitative determination of the samples.
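To make the mobile-phase proportions concrete, here is a minimal sketch converting the 89:6:1:3:1 (v/v) ratio into component volumes; the 1 L batch size is an arbitrary assumption, not a figure from the paper:

```python
# Component volumes for one batch of mobile phase from the 89:6:1:3:1 ratio.
ratio = {"water": 89, "acetonitrile": 6, "methanol": 1,
         "ethyl acetate": 3, "glacial acetic acid": 1}
batch_ml = 1000.0                # illustrative 1 L batch
total = sum(ratio.values())      # 100 parts
for solvent, parts in ratio.items():
    print(f"{solvent}: {batch_ml * parts / total:.0f} mL")
```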
Abstract:
The accumulation of exopolysaccharides (EPS) produced by microorganisms occurs in the presence of excess substrate and under limiting conditions of elements that are essential to growth, such as nitrogen, phosphorus, sulfur, and magnesium. EPS produced by bacterial cells contributes to the formation of slimy colonies on solid medium and to increased viscosity in liquid medium. This paper proposes an alternative method for screening EPS-producing lactic acid bacteria using solid medium containing filter-paper discs saturated with active cultures. The screening was carried out under different culture conditions, varying the type of sugar, pH, and temperature. EPS production was visualized by the presence of mucoid colonies on the discs and confirmed by the formation of a precipitate when part of such a colony was mixed with absolute alcohol. The conditions established for obtaining a high number of EPS-producing isolates were 10% sucrose, pH 7.5, and 28 °C. The method proved to be effective and economical, because several strains can be tested on the same plate, with immediate confirmation.
Abstract:
The procurement management literature emphasizes that effective procurement is a viable means of improving an organization's overall performance. Growing awareness of indirect procurement methods and tools in particular also served as an incentive for this study. The main purpose of this Master's thesis is to build a comprehensive understanding of indirect procurement and to find ways to make it more effective. The objective of the study is to determine how a global, multinational organization can improve its profitability in indirect procurement, and which factors in the procurement strategy influence it. The study was conducted as a single case study from the perspective of an employee of a large global, multinational company. The main part of the data is based on an Opportunity analysis project carried out in 2015 in cooperation with an external consulting firm. Part of the data is based on semi-structured interviews with the organization's procurement director. Personal observation and secondary material from the organization were also used in the data collection. This Master's thesis was carried out with a qualitative approach, including some features of a quantitative method.
Abstract:
This is a Self-study about my role as a teacher, driven by the question "How do I improve my practice?" (Whitehead, 1989). In this study, I explored the discomfort that I had with the way that I had been teaching. Specifically, I worked to uncover the reasons behind my obsessive (mis)management of my students. I wrote of how I came to give my Self permission for this critique: how I came to know that all knowledge is a construction, and that my practice, too, is a construction. I grounded this journey within my experiences. I constructed these experiences in narrative form in order to reach a greater understanding of how I came to be the teacher I initially was. I explored metaphors that impacted my practice, re-constructed them, and saw more clearly the assumptions and influences that have guided my teaching. I centred my inquiry into my teaching within an Action Reflection methodology, borrowing Jack Whitehead's (1989) term to describe my version of Action Research. I relied upon the embedded cyclical pattern of Action Reflection to understand my teaching Self: beginning from a critical moment, reflecting upon it, then taking appropriate action, and continuing in this way, working to improve my practice. To understand these critical moments, I developed a personal definition of critical literacy. I then turned this definition inward. In treating my practice as a textual production, I applied critical literacy as a framework in coming to know and understand the construction that is my teaching. I grounded my thesis journey within my Self, positioning my study within my experiences of being a grade 1 teacher struggling to teach critical literacy. I then repositioned my journey to that of a grade 1 teacher struggling to use critical literacy to improve my practice. This journey, then, is about the transition from critical-literacy-as-subject to critical-literacy-as-instructional-method in improving my practice. I journeyed inwards, using a critical moment to build new understandings, leading me to the next critical moment, and continued in this cyclical way. I worked in this meandering yet deliberate way to reach a new place in my teaching: one that is more inclusive of all the voices in my room. I concluded my journey with a beginning: a beginning of re-visioning my practice. In telling the stories of my journey, of my teaching, of my experiences, I changed into the teacher that I am more comfortable with. "I've come to the frightening conclusion that I am the decisive element in the classroom. It's my personal approach that creates the climate. It's my daily mood that makes the weather. As a teacher, I possess a tremendous power to make a person's life miserable or joyous. I can be a tool of torture or an instrument of inspiration. I can humiliate or humour, hurt or heal. In all situations, it is my response that decides whether a crisis will be escalated or de-escalated and a person humanized or de-humanized." (Ginott, as cited in Buscaglia, 2002, p. 22)
Abstract:
Although fungi are regularly used as models for the study of eukaryotic systems, their phylogenetic relationships still raise controversial questions. Among these, the classification of the zygomycetes remains inconsistent. They are potentially paraphyletic, i.e. they group fungal lineages that are not directly related. The phylogenetic position of the genus Schizosaccharomyces is also controversial: does it belong to the Taphrinomycotina (previously known as archiascomycetes), as predicted by the analysis of nuclear genes, or is it instead related to the Saccharomycotina (budding yeasts), as suggested by mitochondrial phylogeny? Another question concerns the phylogenetic position of the nucleariids, a group of amoeboid eukaryotes assumed to be closely related to fungi. Earlier multi-gene analyses were inconclusive, owing to the choice of a small number of taxa and the use of only six nuclear genes. We addressed these questions through phylogenetic inference and statistical tests applied to assemblies of nuclear and mitochondrial phylogenomic data. According to our results, the zygomycetes are paraphyletic (Chapter 2), although the phylogenetic signal in the available mitochondrial dataset is insufficient to resolve the order of this branching with significant statistical confidence. In Chapter 3, using a large nuclear dataset (more than one hundred proteins) and with conclusive statistical support, we show that the genus Schizosaccharomyces belongs to the Taphrinomycotina. Moreover, we demonstrate that the conflicting grouping of Schizosaccharomyces with the Saccharomycotina obtained from mitochondrial data results from a known type of phylogenetic error: long-branch attraction (LBA), an artifact that groups species whose fast evolutionary rates are not representative of their true position in the phylogenetic tree. In Chapter 4, again using a large nuclear dataset, we demonstrate with significant statistical support that the nucleariids are the group most closely related to fungi. We also confirm, with significant statistical support, the paraphyly of the traditional zygomycetes as previously suggested, although we cannot place all members of the group with confidence. Our results call into question aspects of a recent taxonomic reclassification of the zygomycetes and of their neighbors, the chytridiomycetes. Countering or minimizing phylogenetic artifacts such as long-branch attraction (LBA) is a major recurrent issue. To that end, we developed a new method (Chapter 5) that identifies and removes from a sequence the sites showing large variation in evolutionary rate (highly heterotachous sites, or HH sites); these sites are known to contribute significantly to LBA. Our method is based on a likelihood ratio test (LRT). Two previously published datasets are used to show that the gradual removal of HH sites in fast-evolving species (prone to LBA) significantly increases the support for the expected "true" topology, and does so more efficiently than other published site-removal methods.
Nevertheless, and in general, manipulating the data prior to analysis is far from ideal. Future developments should aim to integrate the identification and weighting of HH sites into the phylogenetic inference process itself.
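As a hedged sketch of the general mechanics behind the Chapter 5 method (the published per-site heterotachy test is more involved), a likelihood ratio test compares a null model against an alternative with extra free parameters, here a site whose rate is allowed to differ between two species groups:

```python
from scipy.stats import chi2

def likelihood_ratio_test(lnL_null: float, lnL_alt: float, extra_params: int) -> float:
    """Generic LRT: the statistic 2*(lnL_alt - lnL_null) is compared to a
    chi-square distribution with as many degrees of freedom as the
    alternative model has extra free parameters; returns the p-value."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return chi2.sf(stat, df=extra_params)

# Hypothetical per-site log-likelihoods: the alternative lets the site's
# substitution rate differ between two species groups (one extra parameter).
p = likelihood_ratio_test(lnL_null=-1234.6, lnL_alt=-1228.1, extra_params=1)
print(f"p = {p:.2g}")  # a small p flags the site as highly heterotachous
```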
Abstract:
Background: An important challenge in conducting social research of specific relevance to harm reduction programs is locating hidden populations of consumers of substances like cannabis who typically report few adverse or unwanted consequences of their use. Much of the deviant, pathologized perception of drug users is historically derived from, and empirically supported by, a research emphasis on gaining ready access to users in drug treatment or in prison populations, which have a higher incidence of problems of dependence and misuse. Because they are less visible, responsible recreational users of illicit drugs have been more difficult to study. Methods: This article investigates Respondent Driven Sampling (RDS) as a method of recruiting experienced marijuana users representative of users in the general population. Based on sampling conducted in a multi-city study (Halifax, Montreal, Toronto, and Vancouver), and compared with samples gathered using other research methods, we assess the strengths and weaknesses of RDS recruitment as a means of gaining access to illicit substance users who experience few harmful consequences of their use. Demographic characteristics of the Toronto sample are compared with those of users in a recent household survey and in a pilot study of Toronto, the latter of which utilized nonrandom self-selection of respondents. Results: A modified approach to RDS was necessary to attain the target sample size in all four cities (i.e., 40 'users' from each site). The final sample in Toronto was nevertheless largely similar to that of marijuana users in a random household survey carried out in the same city. Although the well-educated, the married, whites, and females were all somewhat overrepresented in the survey, the two samples were, overall, more alike than different with respect to economic status and employment. Furthermore, comparison with a self-selected sample suggests that (even modified) RDS recruitment is a cost-effective way of gathering respondents who are more representative of users in the general population than nonrandom methods of recruitment ordinarily produce. Conclusions: Research on marijuana use, and on other forms of drug use hidden in the general adult population, is important for informing and extending harm reduction beyond its current emphasis on 'at-risk' populations. Expanding harm reduction in a normalizing context, through innovative research on users often overlooked, further challenges assumptions about reducing harm through prohibition of drug use and urges consideration of alternative policies such as decriminalization and legal regulation.
Abstract:
Natural systems are inherently nonlinear, and recurrent behaviours are typical of them. Recurrence is a fundamental property of nonlinear dynamical systems that can be exploited to characterize system behaviour effectively. This thesis presents a cross-recurrence-based analysis of sensor signals from nonlinear dynamical systems. The mutual dependency among relatively independent components of a system is referred to as coupling. The analysis is first carried out on a mechanically coupled system specifically designed for the experiment. The cross recurrence method is then extended to an actual machining process in a lathe, to characterize chatter during turning; the result is verified by the permutation entropy method. Conventional linear methods and models are incapable of capturing the critical and strange behaviours associated with such dynamical processes, so any effective feature-extraction methodology should gather information through nonlinear time-series analysis. Sensor signals from a dynamical system normally contain noise and non-stationarity. To cope with these two issues as far as possible, this work adopts the cross recurrence quantification analysis (CRQA) methodology, which is found to be robust against noise and non-stationarity in the signals. The study reveals that CRQA is capable of characterizing even weak coupling among system signals. It also shows that certain CRQA variables, such as percent determinism, percent recurrence, and entropy, depend unambiguously on chatter. A surrogate data test shows that the results obtained by CRQA reflect true properties of the temporal evolution of the dynamics and contain a degree of deterministic structure. The results are verified using permutation entropy (PE) to detect the onset of chatter from the time series. The present study ascertains that this CRP-based methodology is capable of recognizing the transition from regular cutting to chatter cutting irrespective of the machining parameters or workpiece material, establishing it as feasible for the detection of chatter in metal cutting operations on a lathe.
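A minimal sketch of the cross recurrence quantities discussed above (percent recurrence and percent determinism), computed on unembedded scalar signals for brevity; practical CRQA would first delay-embed the signals in phase space, and the weakly coupled oscillators below are invented stand-ins for sensor data:

```python
import numpy as np

def cross_recurrence(x, y, eps):
    """Cross recurrence plot: CR[i, j] = 1 where |x[i] - y[j]| < eps."""
    return (np.abs(x[:, None] - y[None, :]) < eps).astype(int)

def percent_recurrence(cr):
    """Percentage of recurrent points in the plot."""
    return 100.0 * cr.mean()

def percent_determinism(cr, lmin=2):
    """Share of recurrent points lying on diagonal lines of length >= lmin."""
    n = cr.shape[0]
    on_lines = 0
    for k in range(-(n - 1), n):
        run = 0
        for v in np.append(np.diagonal(cr, offset=k), 0):  # 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    on_lines += run
                run = 0
    total = cr.sum()
    return 100.0 * on_lines / total if total else 0.0

# Two weakly coupled oscillators as stand-in sensor signals.
t = np.linspace(0, 20 * np.pi, 500)
x = np.sin(t)
y = np.sin(t + 0.3) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
cr = cross_recurrence(x, y, eps=0.1)
print(percent_recurrence(cr), percent_determinism(cr))
```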
Abstract:
In classical field theory, the ordinary potential V(φ) is the energy density of the state in which the field assumes the constant value φ. In quantum field theory, the effective potential is the expectation value of the energy density in the state for which the expectation value of the field is φ₀. As a result, if V has several local minima, only the absolute minimum corresponds to the true ground state of the theory. Perturbation theory remains to this day the main analytical tool in the study of quantum field theory. However, since perturbation theory is unable to uncover the whole rich structure of quantum field theory, it is desirable to have a method which, on the one hand, goes beyond both perturbation theory and the classical approximation at the points where these fail, and, at the same time, is sufficiently simple that analytical calculations can be performed within its framework. During the last decade a nonperturbative variational method, the Gaussian effective potential, has been widely discussed together with several applications. This concept was described as a means of formalizing our intuitive understanding of zero-point fluctuation effects in quantum mechanics in a way that carries over directly to field theory.
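In standard notation (a textbook formalization consistent with the description above, not a result specific to this work), the effective potential comes from minimizing the energy density over normalized states with a prescribed field expectation value, and the Gaussian effective potential restricts that minimization to Gaussian trial states:

```latex
% Effective potential: minimum energy density over normalized states
% constrained to a given field expectation value \phi_0.
\begin{equation}
  V_{\mathrm{eff}}(\phi_0)
    = \min_{\psi} \left\{ \langle\psi|\hat{\mathcal{H}}|\psi\rangle
      \;:\; \langle\psi|\psi\rangle = 1,\;
      \langle\psi|\hat{\phi}(x)|\psi\rangle = \phi_0 \right\}
\end{equation}
% Gaussian effective potential: the same minimization restricted to
% Gaussian trial wave functionals centered at \phi_0, with the width
% set by a variational mass parameter \Omega.
\begin{equation}
  V_{G}(\phi_0) = \min_{\Omega}\,
    \langle \psi_{\Omega,\phi_0} | \hat{\mathcal{H}} | \psi_{\Omega,\phi_0} \rangle
\end{equation}
```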
Abstract:
The role of the bridging ligand in the effective Heisenberg coupling parameters is analyzed in detail. This analysis strongly suggests that ligand-to-metal charge transfer excitations are responsible for a large part of the final value of the magnetic coupling constant. This permits us to suggest a variant of the difference dedicated configuration interaction (DDCI) method, presently one of the most accurate and reliable methods for the evaluation of effective magnetic interactions. The variant treats the bridging ligand orbitals mediating the interaction at the same level as the magnetic orbitals, and preserves the high quality of the DDCI results while being much less computationally demanding. The numerical accuracy of the new approach is illustrated on various systems with one or two magnetic electrons per magnetic center. The fact that accurate results can be obtained using a rather reduced configuration interaction space opens the possibility of studying more complex systems with many magnetic centers and/or many electrons per center.
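For reference, the effective interactions in question are those of a Heisenberg spin Hamiltonian; in one common sign convention (standard background, not this paper's derivation), for two spin-1/2 magnetic centers:

```latex
% Effective Heisenberg Hamiltonian for two spin-1/2 magnetic centers:
\begin{equation}
  \hat{H} = -J\, \hat{\mathbf{S}}_1 \cdot \hat{\mathbf{S}}_2
\end{equation}
% With this convention the magnetic coupling constant J is extracted
% from the singlet--triplet gap (J < 0 for antiferromagnetic coupling):
\begin{equation}
  J = E_{S} - E_{T}
\end{equation}
```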