Abstract:
A low-temperature lignocellulose pretreatment process was developed using acid-catalysed mixtures of alkylene carbonate and alkylene glycol. Pretreatment of sugarcane bagasse with mixtures of ethylene carbonate (EC) and ethylene glycol (EG) was more effective than with mixtures of propylene carbonate (PC) and propylene glycol (PG). These mixtures were more effective than their individual components in making bagasse cellulose more amenable to cellulase digestion. Glucan digestibilities of ≥87% could be achieved over a wide range of EC:EG ratios, from 9:1 to 1:1 (w/w). Pretreatment of bagasse with a 4:1 EC/EG mixture in the presence of 1.2% H2SO4 at 90 °C for 30 min gave the highest glucan enzymatic digestibility, 93%. The high glucan digestibilities obtained under these acidic conditions were due to (a) the ability of alkylene carbonate to cause significant biomass size reduction, (b) the ability of alkylene glycol to cause biomass defibrillation, (c) the ability of alkylene carbonate and alkylene glycol to remove xylan and lignin, and (d) the amplification of these attributes when alkylene carbonate and alkylene glycol are mixed.
Abstract:
Objectives This randomised, controlled trial compared the effectiveness of 0.12% chlorhexidine (CHX) gel and 0.304% fluoride toothpaste to prevent early childhood caries (ECC) in a birth cohort by 24 months. Methods The participants were randomised to receive either (i) twice-daily toothbrushing with toothpaste plus once-daily 0.12% CHX gel (n = 110) or (ii) twice-daily toothbrushing with toothpaste only (study controls) (n = 89). The primary outcome was caries incidence and the secondary outcome was the percentage of children with mutans streptococci (MS). All mothers were contacted by telephone at 6, 12, and 18 months. At 24 months, all children were examined at a community dental clinic. Results At 24 months, caries prevalence was 5% (3/61) in the CHX group and 7% (4/58) in the controls (P = 0.7). There were no differences in the percentages of MS-positive children between the CHX and control groups (54% vs 53%). Only 20% applied the CHX gel once daily; 80% applied it less than once daily. Conclusions Toothbrushing with 0.304% fluoride toothpaste, with or without the application of 0.12% chlorhexidine gel, reduces ECC from the 23% found in the general community to 5–7%. The lack of effect with chlorhexidine is likely due to low compliance.
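The reported prevalence figures can be checked directly from the counts given in the Results (3/61 in the CHX group, 4/58 in the controls); the variable names below are illustrative, not from the trial.

```python
# Sketch: recomputing the caries prevalence percentages reported above.
chx_caries, chx_n = 3, 61       # CHX group: children with caries / examined
ctrl_caries, ctrl_n = 4, 58     # control group

chx_prev = 100 * chx_caries / chx_n    # percentage with caries, CHX group
ctrl_prev = 100 * ctrl_caries / ctrl_n # percentage with caries, controls
print(f"CHX: {chx_prev:.1f}%  controls: {ctrl_prev:.1f}%")  # → CHX: 4.9%  controls: 6.9%
```

Rounded to whole percentages these match the 5% and 7% quoted in the abstract.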
Abstract:
OBJECTIVE To determine whether a microsatellite polymorphism located towards the 3' end of the low density lipoprotein receptor gene (LDLR) is associated with obesity. DESIGN A cross-sectional case-control study. SUBJECTS One hundred and seven obese individuals, defined as a body mass index (BMI) ≥ 26 kg/m2, and 163 lean individuals, defined as a BMI < 26 kg/m2. MEASUREMENTS BMI, blood pressure, serum lipids, alleles of LDLR microsatellite (106 bp, 108 bp and 112 bp). RESULTS There was a significant association between variants of the LDLR microsatellite and obesity in the overall tested population, due to a contributing effect in females (χ2 = 12.3, P = 0.002), but not in males (χ2 = 0.3, P = 0.87). In females, individuals with the 106 bp allele were more likely to be lean, while individuals with the 112 bp and/or 108 bp alleles tended to be obese. CONCLUSIONS These results suggest that in females, LDLR may play a role in the development of obesity.
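The female association statistic quoted above is consistent with a chi-square test across the three microsatellite alleles, i.e. 2 degrees of freedom, for which the survival function has the closed form P = exp(-x/2). A minimal check, assuming df = 2 (the abstract does not state it explicitly):

```python
import math

# For a chi-square statistic x with 2 degrees of freedom, P = exp(-x / 2).
chi2 = 12.3                 # reported statistic for females
p = math.exp(-chi2 / 2)     # closed-form survival function at df = 2
print(f"P = {p:.3f}")       # → P = 0.002, matching the reported value
```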
Abstract:
Obese (BMI ≥ 26 kg/m2; n = 51) and lean (BMI < 26 kg/m2; n = 61) Caucasian patients with severe, familial essential hypertension were compared with respect to genotype and allele frequencies of a HincII RFLP of the low density lipoprotein receptor gene (LDLR). A similar analysis was performed in obese (n = 28) and lean (n = 68) normotensives. A significant association of the C allele of the T→C variant responsible for this RFLP was seen with obesity (χ2 = 4.6, P = 0.029) in the hypertensive, but not in the normotensive, group (odds ratio = 3.0 for the CC genotype and 2.7 for CT). Furthermore, BMI tracked with genotypes of this allele in the hypertensives (P = 0.046). No significant genotypic relationship was apparent for plasma lipids. Significant linkage disequilibrium was, moreover, noted between the HincII RFLP and an ApaLI RFLP (χ2 = 33, P < 0.0005) that has previously shown even stronger association with obesity (odds ratio 19.6 for cases homozygous for the susceptibility allele and 15.2 for heterozygotes). The present study therefore adds to our previous evidence implicating LDLR as a locus for obesity in patients with essential hypertension.
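The odds ratios quoted above (3.0 for CC, 2.7 for CT) come from standard 2x2 case-control tables. A minimal sketch of that calculation; the counts below are hypothetical, chosen only so the result comes out to 3.0, since the abstract does not give the genotype tables:

```python
def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a / b) / (c / d)

# Hypothetical counts: 15 obese / 10 lean with the CC genotype,
# 20 obese / 40 lean without it.
print(odds_ratio(15, 10, 20, 40))  # → 3.0
```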
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications spanning practical domains such as security surveillance and health care, it operates under severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of determining optimal camera configurations. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras whose freedom of movement is assumed to be restricted to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter, and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
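Once a camera has estimated its ground-plane homography from the robot's broadcast positions, localising an object reduces to one projective transform of its image-plane location. A minimal sketch; the 3x3 matrix H below is an arbitrary scale-and-translate example, not one from the dissertation:

```python
def apply_homography(H, x, y):
    """Map image point (x, y) to ground-plane coordinates via 3x3 H."""
    X = H[0][0] * x + H[0][1] * y + H[0][2]
    Y = H[1][0] * x + H[1][1] * y + H[1][2]
    W = H[2][0] * x + H[2][1] * y + H[2][2]
    return X / W, Y / W   # normalise homogeneous coordinates

# Illustrative homography: scale pixels by 0.5 and translate by (2, 5).
H = [[0.5, 0.0, 2.0],
     [0.0, 0.5, 5.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 320, 240))  # → (162.0, 125.0)
```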
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
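The dissertation's Trans-Dimensional Simulated Annealing additionally proposes moves that add or remove cameras, changing the dimension of the configuration; the sketch below shows only the fixed-dimension annealing core on a toy one-dimensional objective, with made-up parameters:

```python
import math, random

def anneal(cost, x0, steps=20000, t0=1.0, cooling=0.999):
    """Plain simulated annealing: Gaussian proposals, geometric cooling."""
    random.seed(0)                               # deterministic for the demo
    x, c, t = x0, cost(x0), t0
    for _ in range(steps):
        cand = x + random.gauss(0, 0.5)          # propose a nearby configuration
        cc = cost(cand)
        # Accept downhill moves always; uphill with Boltzmann probability.
        if cc < c or random.random() < math.exp((c - cc) / t):
            x, c = cand, cc
        t *= cooling                             # geometric cooling schedule
    return x, c

best_x, best_cost = anneal(lambda x: (x - 3.0) ** 2, x0=10.0)
print(round(best_x, 2))  # settles near the optimum at x = 3
```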
Abstract:
The growth of suitable tissue to replace natural blood vessels requires a degradable scaffold material that is processable into porous structures with appropriate mechanical and cell growth properties. This study investigates the fabrication of degradable, crosslinkable prepolymers of l-lactide-co-trimethylene carbonate into porous scaffolds by electrospinning. After crosslinking by γ-radiation, dimensionally stable scaffolds were obtained with up to 56% trimethylene carbonate incorporation. The fibrous mats showed Young’s moduli closely matching human arteries (0.4–0.8 MPa). Repeated cyclic extension yielded negligible change in mechanical properties, demonstrating the potential for use under dynamic physiological conditions. The scaffolds remained elastic and resilient at 30% strain after 84 days of degradation in phosphate buffer, while the modulus and ultimate stress and strain progressively decreased. The electrospun mats are mechanically superior to solid films of the same materials. In vitro, human mesenchymal stem cells adhered to and readily proliferated on the three-dimensional fiber network, demonstrating that these polymers may find use in growing artificial blood vessels in vivo.
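The modulus comparison above (0.4–0.8 MPa for human arteries) rests on extracting Young's modulus as the slope of the initial linear region of a stress–strain curve. A minimal sketch with hypothetical data points, not values from the study:

```python
# Hypothetical linear region of a stress-strain curve for an electrospun mat.
strains = [0.00, 0.05, 0.10, 0.15]        # dimensionless strain
stresses = [0.000, 0.030, 0.060, 0.090]   # MPa

# Young's modulus = slope of stress vs strain over the linear region.
E = (stresses[-1] - stresses[0]) / (strains[-1] - strains[0])
print(f"E = {E:.2f} MPa")  # → E = 0.60 MPa, within the quoted arterial range
```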
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications such as mobile phone conversations and Internet traffic. Stream ciphers are ideal cryptographic algorithms for encrypting these types of traffic, as they have the potential to encrypt them quickly and securely, and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure, so modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected in the final portfolio of a multi-year European project, the ECRYPT project. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature which are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations. This has implications for the period of keystreams produced by Trivium.
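Trivium's nonlinear state-update and linear output functions are compact enough to sketch directly. The sketch below follows the published eSTREAM specification (a 288-bit state in three registers of 93, 84 and 111 bits, with 1152 initialisation rounds); bit-ordering conventions and agreement with official test vectors are not verified here:

```python
def trivium(key_bits, iv_bits, n):
    """Generate n keystream bits from 80-bit key and IV bit lists."""
    assert len(key_bits) == 80 and len(iv_bits) == 80
    # Load state: key then zeros (93), IV then zeros (84), zeros then 111 (111).
    s = key_bits + [0] * 13 + iv_bits + [0] * 4 + [0] * 108 + [1, 1, 1]
    out = []
    for i in range(4 * 288 + n):          # first 1152 rounds produce no output
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:
            out.append(t1 ^ t2 ^ t3)      # linear output function
        # Nonlinear state-update: one AND gate feeds each register.
        t1 ^= (s[90] & s[91]) ^ s[170]
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # Rotate the three registers, inserting the new bits.
        s = [t3] + s[:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out

ks = trivium([0] * 80, [0] * 80, 64)
print(len(ks))  # → 64
```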
Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analyses with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive keysearch. We also show that the selection of stages used as input to the output function, and the size of the registers used in the construction of the system of equations, affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence because the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined. We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers.
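The state-convergence mechanism described above (a state-update function that is not one-to-one, as in Mixer's initialisation) can be illustrated on a toy scale: iterating a non-injective update shrinks the set of reachable states. The 4-bit update F below is invented purely to show the effect and is not Mixer's actual function:

```python
def F(s):
    """Non-injective update on a 4-bit state: the ORs destroy information."""
    return ((s << 1) | (s >> 3) | (s & 1)) & 0xF

# Start from all 16 possible states and iterate the update.
states = set(range(16))
for _ in range(5):
    states = {F(s) for s in states}
print(len(states))  # → 3: distinct initial states have converged
```

In a real cipher the analogous shrinkage means the effective session key space is smaller than the nominal state size suggests.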
We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in the sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output function takes inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
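The pattern-distribution comparison above can be set up on a toy scale: a small LFSR filtered by a nonlinear Boolean function, tapping either consecutive or spread-out stages, with the resulting 3-bit pattern counts tallied. The register, filter function, and tap positions below are invented examples; this does not reproduce the thesis's uniformity results, only the counting methodology:

```python
from collections import Counter

def lfsr_states(n, state=0b10011, fb_taps=(0, 2), width=5):
    """Return n successive full states of a small Fibonacci LFSR."""
    out = []
    for _ in range(n):
        out.append(state)
        fb = 0
        for t in fb_taps:
            fb ^= (state >> t) & 1                    # linear feedback bit
        state = ((state >> 1) | (fb << (width - 1))) & ((1 << width) - 1)
    return out

def filter_bits(states, taps):
    """Nonlinear filter f(a, b, c) = a XOR (b AND c) on chosen stages."""
    return [((s >> taps[0]) & 1) ^ (((s >> taps[1]) & 1) & ((s >> taps[2]) & 1))
            for s in states]

states = lfsr_states(500)
for name, taps in [("consecutive", (0, 1, 2)), ("spread", (0, 2, 4))]:
    bits = filter_bits(states, taps)
    triples = Counter(tuple(bits[i:i + 3]) for i in range(len(bits) - 2))
    print(name, dict(triples))    # tally of overlapping 3-bit patterns
```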