849 results for "Operação cut-off"
Abstract:
Short-beaked echidnas have an impressive ability to submerge completely into soil or sand and remain there, cryptic, for long periods. This poses questions about how they manage their respiration, cut off from a free flow of gases. We measured the gradient in oxygen partial pressure (PO2) away from the snouts of buried echidnas, and oxygen consumption (VO2) in five individuals under similar conditions, in two substrates with different air-filled porosities (fa). A theoretical diffusion model indicated that diffusion alone was insufficient to account for the flux of oxygen required to meet measured rates of VO2. However, echidnas often showed periodic movements of the anterior part of the body, as if deliberately flushing the tidal air space surrounding their nostrils. These 'flushing movements' were subsequently found to temporarily increase the levels of interstitial oxygen in the soil around the head region. Flushing movements were more frequent when VO2 was higher during the burrowing process, and also in the substrate with lower fa. We conclude that oxygen supply to buried echidnas is maintained by diffusion through the soil, augmented by periodic flushing movements that ventilate the tidal airspace surrounding the nostrils.
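The diffusion argument above can be illustrated with Fick's first law. The sketch below is a minimal one-dimensional steady-state check; every numerical value (effective diffusivity, exchange area, PO2 drop, the assumed VO2 demand) is an illustrative assumption, not a measurement from the study.

```python
# 1-D steady-state diffusion check (Fick's first law).
# All parameter values are illustrative assumptions, not study data.

def oxygen_flux(d_eff_cm2_s, area_cm2, delta_po2_kpa, distance_cm,
                capacitance_nmol_cm3_kpa=404.0):
    """Oxygen flux (nmol/s) driven by a PO2 gradient through soil.

    d_eff: effective diffusion coefficient of O2 in the air-filled pores
    (already scaled by the air-filled porosity fa); the capacitance converts
    kPa to nmol/cm^3 (approximately the ideal-gas value P/RT at 25 C).
    """
    gradient = delta_po2_kpa / distance_cm  # kPa per cm
    return d_eff_cm2_s * area_cm2 * gradient * capacitance_nmol_cm3_kpa

# Hypothetical numbers: if diffusive supply falls short of the assumed VO2,
# an extra mechanism (the flushing movements) is required.
supply = oxygen_flux(d_eff_cm2_s=0.02, area_cm2=50.0,
                     delta_po2_kpa=5.0, distance_cm=4.0)
demand_nmol_s = 2000.0  # assumed resting VO2 expressed as nmol O2/s
print(supply < demand_nmol_s)  # → True: diffusion alone is insufficient
```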
Abstract:
How can empirical evidence of adverse effects from exposure to noxious agents, which is often incomplete and uncertain, be used most appropriately to protect human health? We examine several important questions on the best uses of empirical evidence in regulatory risk management decision-making raised by the US Environmental Protection Agency (EPA)'s science-policy concerning uncertainty and variability in human health risk assessment. In our view, the US EPA (and other agencies that have adopted similar views of risk management) can often improve decision-making by decreasing reliance on default values and assumptions, particularly when causation is uncertain. This can be achieved by more fully exploiting decision-theoretic methods and criteria that explicitly account for uncertain, possibly conflicting scientific beliefs and that can be fully studied by advocates and adversaries of a policy choice, in administrative decision-making involving risk assessment. The substitution of decision-theoretic frameworks for default assumption-driven policies also allows stakeholder attitudes toward risk to be incorporated into policy debates, so that the public and risk managers can more explicitly identify the roles of risk-aversion or other attitudes toward risk and uncertainty in policy recommendations. Decision theory provides a scientifically sound way to account explicitly for new knowledge and its effects on eventual policy choices. Although these improvements can complicate regulatory analyses, simplifying default assumptions can create substantial costs to society and can prematurely cut off consideration of new scientific insights (e.g., possible beneficial health effects from exposure to sufficiently low 'hormetic' doses of some agents). In many cases, the administrative burden of applying decision-analytic methods is likely to be more than offset by improved effectiveness of regulations in achieving desired goals.
Because many foreign jurisdictions adopt US EPA reasoning and methods of risk analysis, it may be especially valuable to incorporate decision-theoretic principles that transcend local differences among jurisdictions.
Abstract:
B-type natriuretic peptide (BNP) is the first biomarker of proven value in screening for left ventricular dysfunction. The availability of point-of-care testing has escalated clinical interest and the resultant research is defining a role for BNP in the investigation and treatment of critically ill patients. This review was undertaken with the aim of collecting and assimilating current evidence regarding the use of BNP assay in the evaluation of myocardial dysfunction in critically ill humans. The information is presented in a format based upon organ system and disease category. BNP assay has been studied in a spectrum of clinical conditions ranging from acute dyspnoea to subarachnoid haemorrhage. Its role in diagnosis, assessment of disease severity, risk stratification and prognostic evaluation of cardiac dysfunction appears promising, but requires further elaboration. The heterogeneity of the critically ill population appears to warrant a range of cut-off values. Research addressing progressive changes in BNP concentration is hindered by infrequent assay and appears unlikely to reflect the critically ill patient's rapidly changing haemodynamics. Multi-marker strategies may prove valuable in prognostication and evaluation of therapy in a greater variety of illnesses. Scant data exist regarding the use of BNP assay to alter therapy or outcome. It appears that BNP assay offers complementary information to conventional approaches for the evaluation of cardiac dysfunction. Continued research should augment the validity of BNP assay in the evaluation of myocardial function in patients with life-threatening illness.
Abstract:
Objective: To estimate cut-off points for the diagnosis of diabetes mellitus (DM) based on individual risk factors. Methods: A subset of the 1991 Oman National Diabetes Survey was used, including all patients with a 2 h post-glucose-load value >= 200 mg/dl (278 subjects) and a control group of 286 subjects. All subjects previously diagnosed as diabetic and all subjects with missing data values were excluded. The data set was analyzed with the SPSS Clementine data mining system. Decision tree learners (C5 and CART) and a method for mining association rules (the GRI algorithm) were used. Fasting plasma glucose (FPG), age, sex, family history of diabetes and body mass index (BMI) were the input risk factors (independent variables), while diabetes onset (the 2 h post-glucose load >= 200 mg/dl) was the output (dependent variable). All three techniques were tested by cross-validation (89.8%). Results: The rules produced for diabetes diagnosis are: A- GRI algorithm: (1) FPG>=108.9 mg/dl, (2) FPG>=107.1 and age>39.5 years. B- CART decision tree: FPG>=110.7 mg/dl. C- C5 decision tree learner: (1) FPG>=95.5 and 54, (2) FPG>=106 and 25.2 kg/m2, (3) FPG>=106 and =133 mg/dl. The three techniques produced rules which cover a significant number of cases (82%), with confidence between 74 and 100%. Conclusion: Our approach supports the suggestion that the present cut-off value of fasting plasma glucose (126 mg/dl) for the diagnosis of diabetes mellitus needs revision, and that individual risk factors such as age and BMI should be considered in defining the new cut-off value.
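The rules reported in the Results are simple threshold checks and can be applied directly. The sketch below implements only the GRI and CART rules, because the C5 thresholds are incompletely reproduced in the abstract; the example patient is hypothetical.

```python
# Minimal sketch of the reported screening rules (GRI and CART only).

def gri_flags_diabetes(fpg_mg_dl, age_years):
    """GRI association rules: FPG >= 108.9 mg/dl, or
    FPG >= 107.1 mg/dl combined with age > 39.5 years."""
    return fpg_mg_dl >= 108.9 or (fpg_mg_dl >= 107.1 and age_years > 39.5)

def cart_flags_diabetes(fpg_mg_dl):
    """CART decision tree: a single split at FPG >= 110.7 mg/dl."""
    return fpg_mg_dl >= 110.7

# A hypothetical 45-year-old with FPG 107.5 mg/dl is flagged by the GRI
# rules but falls below the CART split.
print(gri_flags_diabetes(fpg_mg_dl=107.5, age_years=45))  # → True
print(cart_flags_diabetes(fpg_mg_dl=107.5))               # → False
```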
Abstract:
This research deals with the production of pectic oligosaccharides (POS) from agro-industrial residues, with specific focus on the development of a continuous cross-flow enzyme membrane reactor. Pectic oligosaccharides have recently gained attention due to their prebiotic activity. The lack of information on the continuous production of POS from agro-industrial residues formed the basis for the present study. Four residues, i.e. sugar beet pulp, onion hulls, pressed pumpkin cake and berry pomace, were examined for their pectin content. Based on the presence of higher galacturonic acid and arabinose (both homogalacturonan and rhamnogalacturonan) in sugar beet pulp and galacturonic acid (only homogalacturonan) in onion hulls, different methods for extracting pectin from these residues (causing minimum damage to the pectic chain) were further optimized. The most suitable extractants for sugar beet pulp and onion hulls were nitric acid and sodium hexametaphosphate, respectively. Experiments on the continuous production of POS from sugar beet pulp in an enzyme membrane reactor were then initiated. Several optimization experiments indicated the optimum enzyme (Viscozyme) as well as the feed concentration (25 g/L) to be used for producing POS from sugar beet pulp in an enzyme membrane reactor. The results highlighted that steady-state POS production with volumetric and specific productivities of 22 g/L/h and 11 g/gE/h, respectively, could be achieved by continuous cross-flow filtration of sugar beet pulp pectic extract over a 10 kDa membrane at a residence time of 20 min. A POS yield of about 80% could be achieved under these conditions. In this thesis, preliminary experiments on the production and characterization of POS from onion hulls were also conducted. The results revealed that the most suitable enzyme for POS production from onion hulls is endo-polygalacturonase M2.
The POS produced from onion hulls were present in the form of DP1 to DP10, in substituted as well as unsubstituted forms. This study clearly demonstrates that continuous production of POS from pectin-rich sources can be achieved using a continuous cross-flow enzyme membrane reactor.
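The two steady-state productivities reported above are linked by the enzyme loading in the reactor. A minimal sketch: the 2 g/L enzyme concentration and the ~7.3 g/L product concentration below are figures inferred from the reported values and residence time, not values stated in the abstract.

```python
# Relation between the reported steady-state productivities.
# Derived figures (enzyme loading, product concentration) are inferences.

def specific_productivity(volumetric_g_l_h, enzyme_g_l):
    """Volumetric productivity per gram of enzyme held in the reactor."""
    return volumetric_g_l_h / enzyme_g_l

volumetric = 22.0   # g/L/h, reported steady-state value
specific = 11.0     # g/gE/h, reported steady-state value

# Implied enzyme loading: ratio of the two productivities.
implied_enzyme_g_l = volumetric / specific
print(implied_enzyme_g_l)  # → 2.0 (g enzyme per litre, inferred)

# Implied product concentration at the reported 20 min residence time.
implied_product_g_l = volumetric * (20.0 / 60.0)
print(round(implied_product_g_l, 2))  # → 7.33 (g POS per litre, inferred)
```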
Abstract:
Purpose: To investigate the utility of uncorrected visual acuity measures in screening for refractive error in white school children aged 6-7 years and 12-13 years. Methods: The Northern Ireland Childhood Errors of Refraction (NICER) study used a stratified random cluster design to recruit children from schools in Northern Ireland. Detailed eye examinations included assessment of logMAR visual acuity and cycloplegic autorefraction. Spherical equivalent refractive data from the right eye were used to classify significant refractive error as myopia of at least 1DS, hyperopia greater than +3.50DS, and astigmatism greater than 1.50DC, whether occurring in isolation or in association with myopia or hyperopia. Results: Results are presented for 661 white 12-13-year-old and 392 white 6-7-year-old school children. Using a cut-off of uncorrected visual acuity poorer than 0.20 logMAR to detect significant refractive error gave a sensitivity of 50% and a specificity of 92% in 6-7-year-olds, and 73% and 93% respectively in 12-13-year-olds. In 12-13-year-old children, a cut-off of poorer than 0.20 logMAR had a sensitivity of 92% and a specificity of 91% in detecting myopia, and a sensitivity of 41% and a specificity of 84% in detecting hyperopia. Conclusions: Vision screening using logMAR acuity can reliably detect myopia, but not hyperopia or astigmatism, in school-age children. Providers of vision screening programs should be cognisant that, where detection of uncorrected hyperopic and/or astigmatic refractive error is an aspiration, current UK protocols will not deliver it effectively.
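The sensitivity and specificity figures above come from a screening confusion matrix at the acuity cut-off. A minimal sketch of the calculation: the counts below are illustrative assumptions chosen to give percentages of the same order as those reported, not the NICER data.

```python
# Sensitivity/specificity at a screening cut-off. Counts are illustrative.

def screening_metrics(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity: fraction of children with significant refractive error
    whose acuity is poorer than the cut-off (flagged).
    Specificity: fraction without refractive error who pass."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical cohort: 50 children with myopia, 100 without.
sens, spec = screening_metrics(true_pos=46, false_neg=4,
                               true_neg=91, false_pos=9)
print(round(sens, 2), round(spec, 2))  # → 0.92 0.91
```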
Abstract:
A review of ultrafiltration (UF) theory and equipment has been made. Dextran is fractionated industrially by ethanol precipitation, which is a highly energy-intensive process. The aims of this work were to investigate the fractionation of dextran using UF and to compare the efficiency and costs of UF fractionation with ethanol fractionation. This work is the continuation of research conducted at Aston concerned with the fractionation of dextran using gel permeation chromatography (GPC) and hollow-fibre UF membranes supplied by Amicon Ltd. Initial laboratory work centred on determining the most efficient make and configuration of membrane. UF membranes of the Millipore cassette configuration and the DDS flat-sheet configuration were examined for the fractionation of low molecular weight (MW) dextran. When compared to the Amicon membranes, these membranes were found to be inferior. DDS membranes with 25 000 and 50 000 MW cut-offs were shown to be capable of fractionating high MW dextran with the same efficiency as GPC. The Amicon membranes had an efficiency comparable to that of ethanol fractionation. To increase this efficiency, a theoretical UF membrane cascade was adopted to exploit favourable characteristics encountered in batch-mode membrane experiments. The four-stage cascade recycled permeates counter-current to the retentate flow, and was operated 24 hours per day under computer control. Using 5 000 MW cut-off membranes, the cascade improved the batch efficiency by at least 10% for a fractionation at 6 000 MW. Economic comparisons of ethanol fractionation, combined GPC and UF fractionation, and UF fractionation of dextran were undertaken. On an economic basis, GPC was the best method for high MW dextran fractionation. Compared with a plant producing 100 tonnes per annum of clinical dextran by ethanol fractionation, a combined GPC and UF cascade fractionation could produce savings on operating costs and an increased dextran yield of 5%.
Abstract:
Sepsis continues to be a major cause of morbidity and mortality as it can readily lead to severe sepsis, septic shock, multiple organ failure and death. The onset can be rapid and difficult to define clinically. Despite the numerous candidate markers proposed in the literature, to date a serum marker for sepsis has not been found. The aim of this study was to assay the serum of clinically diagnosed patients with either a Gram-negative or Gram-positive bacterial sepsis for elevated levels of nine potential markers of sepsis, using commercially produced enzyme-linked immunosorbent assays (ELISA). The purpose was to find a test marker for sepsis that would be helpful to clinicians in cases of uncertain sepsis and consequently expose false positive blood cultures (BCs) caused by skin or environmental contaminants. Nine test markers were assayed, including IL-6, IL-10, IL-12, TNF-α, lipopolysaccharide binding protein, procalcitonin, sE-selectin, sICAM-1 and a potential differential marker for Gram-positive sepsis, anti-lipid S antibody. A total of 445 patients were enrolled into this study from the Queen Elizabeth Hospital and Selly Oak Hospital (Birmingham). The results showed that all the markers were elevated in patients with sepsis and that patients with a Gram-negative sepsis consistently produced higher median/range serum levels than those with a Gram-positive sepsis. No single marker was able to identify all the septic patients. Combining two markers increased the sensitivities and specificities for a diagnosis of sepsis to within a 90% to 100% range. By a process of elimination, the markers that survived into the last phase were IL-6 with sICAM-1, and the anti-lipid S IgG assay. Defining cut-off levels for a diagnosis of sepsis became problematic, and a semi-blind trial was devised to test the markers in the absence of both clinical details and positive blood cultures. Patients with pyrexia of unknown origin (PUO) and negative BCs were included in this phase (4).
The results showed that IL-6 with sICAM-1 are authentic markers of sepsis. There was 82% agreement between the test-marker diagnosis and the clinical diagnosis of sepsis in patients with a Gram-positive BC, and 78% agreement in cases of a Gram-negative BC. In the PUO group the test markers identified 12 cases of sepsis and the clinical diagnosis 15. The markers were shown to differentiate between early sepsis and sepsis, inflammatory responses and infection. Anti-lipid S with IL-6 proved to be a sensitive marker for Gram-positive infections/sepsis.
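One idealised account of why pairing markers pushed sensitivity and specificity toward the 90-100% range is the "either test positive" rule: if a positive result on either of two tests is read as positive, sensitivity rises while specificity falls. The independence assumption and the individual-marker values below are hypothetical, not the study's measured figures.

```python
# 'Believe the positive' combination of two tests, assuming independence.
# Individual sensitivities/specificities below are illustrative.

def combine_either_positive(sens1, spec1, sens2, spec2):
    """Combined metrics when either test being positive counts as positive:
    a case is missed only if both tests miss it; a control passes only if
    both tests clear it."""
    sensitivity = 1 - (1 - sens1) * (1 - sens2)
    specificity = spec1 * spec2
    return sensitivity, specificity

sens, spec = combine_either_positive(0.80, 0.95, 0.75, 0.96)
print(round(sens, 2), round(spec, 3))  # → 0.95 0.912
```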
Abstract:
The thesis will show how to equalise the effect of quantal noise across spatial frequencies by keeping the retinal flux (If^-2) constant. In addition, quantal noise is used to study the effect of grating area and spatial frequency on contrast sensitivity, resulting in the extension of the new contrast detection model describing the human contrast detection system as a simple image processor. According to the model, the human contrast detection system comprises low-pass filtering due to ocular optics, addition of light-dependent noise at the event of quantal absorption, high-pass filtering due to the neural visual pathways, and addition of internal neural noise, after which detection takes place by a local matched filter whose sampling efficiency decreases as grating area is increased. Furthermore, this work will demonstrate how to extract both the optical and neural modulation transfer functions of the human eye. The neural transfer function is found to be proportional to spatial frequency up to the local cut-off frequency at eccentricities of 0-37 deg across the visual field. The optical transfer function of the human eye is proposed to be more affected by the Stiles-Crawford effect than generally assumed in the literature. Similarly, this work questions the prevailing ideas about the factors limiting peripheral vision by showing that peripheral optics act as a low-pass filter in normal viewing conditions, and therefore the effect of peripheral optics is worse than generally assumed.
Abstract:
This thesis consisted of two major parts, one determining the masking characteristics of pixel noise and the other investigating the properties of the detection filter employed by the visual system. The theoretical cut-off frequency of white pixel noise can be defined from the size of the noise pixel. The empirical cut-off frequency, i.e. the largest size of noise pixels that mimics the effect of white noise in detection, was determined by measuring contrast energy thresholds for grating stimuli in the presence of spatial noise consisting of noise pixels of various sizes and shapes. The critical, i.e. minimum, number of noise pixels per grating cycle needed to mimic the effect of white noise in detection was found to decrease with the bandwidth of the stimulus. The shape of the noise pixels did not have any effect on the whiteness of pixel noise as long as there was at least the minimum number of noise pixels in all spatial dimensions. Furthermore, the masking power of white pixel noise is best described when the spectral density is calculated by taking into account all the dimensions of noise pixels, i.e. width, height, and duration, even when there is random luminance in only one of these dimensions. The properties of the detection mechanism employed by the visual system were studied by measuring contrast energy thresholds for complex spatial patterns as a function of area in the presence of white pixel noise. Human detection efficiency was obtained by comparing human performance with an ideal detector. The stimuli consisted of band-pass filtered symbols, uniform and patched gratings, and point stimuli with randomised phase spectra. In agreement with the existing literature, detection performance was found to decline with the increasing amount of detail and contour in the stimulus. A measure of image complexity was developed and successfully applied to the data.
The accuracy of the detection mechanism seems to depend on the spatial structure of the stimulus and the spatial spread of contrast energy.
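The spectral-density point above can be made explicit: for white pixel noise, the density equals the contrast variance of the noise multiplied by each pixel dimension (width, height, duration). A minimal sketch; the numerical values are illustrative assumptions.

```python
# Spectral density of spatiotemporal white pixel noise: contrast variance
# times every pixel dimension. Values below are illustrative.

def noise_spectral_density(rms_contrast, pixel_w_deg, pixel_h_deg,
                           pixel_duration_s):
    """Spectral density of white pixel noise (units: deg^2 * s)."""
    return rms_contrast ** 2 * pixel_w_deg * pixel_h_deg * pixel_duration_s

n = noise_spectral_density(rms_contrast=0.2, pixel_w_deg=0.05,
                           pixel_h_deg=0.05, pixel_duration_s=0.04)
print(n)  # ≈ 4e-06 deg^2 * s
```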
Abstract:
This thesis studied the effect of (i) the number of grating components and (ii) parameter randomisation on root-mean-square (r.m.s.) contrast sensitivity and spatial integration. The effectiveness of spatial integration without external spatial noise depended on the number of equally spaced orientation components in the sum of gratings. The critical area marking the saturation of spatial integration was found to decrease when the number of components increased from 1 to 5-6, but increased again at 8-16 components. The critical area behaved similarly as a function of the number of grating components when stimuli consisted of 3, 6 or 16 components with different orientations and/or phases embedded in spatial noise. Spatial integration seemed to depend on the global Fourier structure of the stimulus. Spatial integration was similar for sums of two vertical cosine or sine gratings with various Michelson contrasts in noise. The critical area for a grating sum was found to be a sum of the logarithmic critical areas for the component gratings weighted by their relative Michelson contrasts. The human visual system was modelled as a simple image processor where the visual stimulus is first low-pass filtered by the optical modulation transfer function of the human eye and then high-pass filtered, up to the spatial cut-off frequency determined by the lowest neural sampling density, by the neural modulation transfer function of the visual pathways. Internal noise is then added before signal interpretation occurs in the brain. Detection is mediated by a local spatially windowed matched filter. The model was extended to include complex stimuli, and its applicability to the data was found to be successful. The shape of the spatial integration function was similar for non-randomised and randomised simple and complex gratings. However, orientation and/or phase randomisation reduced r.m.s. contrast sensitivity by a factor of 2.
The effect of parameter randomisation on spatial integration was modelled under the assumption that human observers change the observer strategy from cross-correlation (i.e., a matched filter) to auto-correlation detection when uncertainty is introduced to the task. The model described the data accurately.
Abstract:
Purpose: To evaluate changes in tear metrics and ocular signs induced by six months of silicone-hydrogel contact lens wear, and the difference in baseline characteristics between those who successfully continued in contact lens wear and those who did not. Methods: Non-invasive Keratograph, Tearscope and fluorescein tear break-up times (TBUTs), tear meniscus height, bulbar and limbal hyperaemia, lid-parallel conjunctival folds (LIPCOF), phenol red thread, fluorescein and lissamine-green staining, and lid wiper epitheliopathy were measured on 60 new contact lens wearers fitted with monthly silicone-hydrogels (average age 36 ± 14 years, 40 females). Symptoms were evaluated by the Ocular Surface Disease Index (OSDI). After six months of full-time contact lens wear the above metrics were re-measured on those patients still in contact lens wear (n = 33). The initial measurements were also compared between the group still wearing lenses after six months and those who had ceased lens wear (n = 27). Results: There were significant changes in tear meniscus height (p = 0.031), bulbar hyperaemia (p = 0.011), fluorescein TBUT (p = 0.027), corneal (p = 0.007) and conjunctival (p = 0.009) staining, LIPCOF (p = 0.011) and lid wiper epitheliopathy (p = 0.002) after six months of silicone-hydrogel wear. Successful wearers had a higher non-invasive (17.0 ± 8.2 s vs 12.0 ± 5.6 s; p = 0.001) and fluorescein (10.7 ± 6.4 s vs 7.5 ± 4.7 s; p = 0.001) TBUT than drop-outs, although OSDI (cut-off 4.2) was also a strong predictor of success. Conclusion: Silicone-hydrogel lenses induced significant changes in the tear film and ocular surface as well as lid margin staining. Wettability of the ocular surface is the main factor affecting contact lens drop-out. © 2013 British Contact Lens Association.
Abstract:
Purpose: To optimize anterior eye fluorescein viewing and image capture. Design: Prospective experimental investigation. Methods: The spectral radiance of the blue illumination of ten different models of slit-lamp and the spectral transmission of three barrier filters were measured. Optimal clinical instillation of fluorescein was evaluated by comparing four different methods of instilling fluorescein into 10 subjects: two used a floret, and two used minims of different concentration. The resulting fluorescence was evaluated for quenching effects and efficiency over time. Results: The spectral radiance of the blue illumination typically peaked at 460 nm. Comparison between three slit-lamps of the same model showed a similar spectral radiance distribution. Of the slit-lamps examined, 8.3% to 50.6% of the illumination output was optimal for >80% fluorescein excitation, and 1.2% to 23.5% of the illumination overlapped with that emitted by the fluorophore. The barrier filters had an average cut-off at 510 to 520 nm. Quenching was observed for all methods of fluorescein instillation. The moistened floret and the 1% minim reached a useful level of fluorescence in ∼20 s on average (∼2.5× faster than the saturated floret and 2% minim), and this lasted for ∼160 s. Conclusions: Most slit-lamps' blue light and yellow barrier filters are not optimal for fluorescein viewing and capture. Instillation of fluorescein using a moistened floret or a 1% minim seems most clinically appropriate, as lower quantities and concentrations of fluorescein improve the efficiency of clinical examination. © 2006 Elsevier Inc. All rights reserved.
Abstract:
AIM: To develop a short, enhanced functional ability Quality of Vision (faVIQ) instrument based on previous questionnaires, employing comprehensive modern statistical techniques to ensure the use of an appropriate response scale, items and scoring of the vision-related difficulties experienced by patients with visual impairment. METHODS: Items in current quality-of-life questionnaires for the visually impaired were refined by a multi-professional group and visually impaired focus groups. The resulting 76 items were completed by 293 visually impaired patients with stable vision on two occasions separated by a month. The faVIQ scores of 75 patients with no ocular pathology were compared to 75 age- and gender-matched patients with visual impairment. RESULTS: Rasch analysis reduced the faVIQ items to 27. Correlation to standard visual metrics was moderate (r = 0.32-0.46) and to the NEI-VFQ was 0.48. The faVIQ was able to clearly discriminate between age- and gender-matched populations with no ocular pathology and visual impairment, with an index of 0.983 and 95% sensitivity and 95% specificity using a cut-off of 29. CONCLUSION: The faVIQ allows sensitive assessment of quality-of-life in the visually impaired and should support studies which evaluate the effectiveness of low vision rehabilitation services. © Copyright International Journal of Ophthalmology Press.
Abstract:
Partially supported by the Bulgarian Science Fund contract with TU Varna, No 487.