870 results for objective
Abstract:
BACKGROUND: A number of epidemiological studies have examined the adverse effects of air pollution on mortality and morbidity. Several studies have also investigated the associations between air pollution and specific diseases, including arrhythmia, myocardial infarction, and heart failure. However, little is known about the relationship between air pollution and the onset of hypertension. OBJECTIVE: To explore the effect of particulate matter air pollution on emergency hospital visits (EHVs) for hypertension in Beijing, China. METHODS: We gathered data on daily EHVs for hypertension and on fine particulate matter less than 2.5 μm in aerodynamic diameter (PM2.5), particulate matter less than 10 μm in aerodynamic diameter (PM10), sulfur dioxide, and nitrogen dioxide in Beijing, China, during 2007. A time-stratified case-crossover design with a distributed lag model was used to evaluate associations between ambient air pollutants and hypertension. Daily mean temperature and relative humidity were controlled for in all models. RESULTS: There were 1,491 EHVs for hypertension during the study period. In single-pollutant models, a 10 μg/m³ increase in PM2.5 and PM10 was associated with EHVs for hypertension, with odds ratios (overall effect over five days) of 1.084 (95% confidence interval (CI): 1.028, 1.139) and 1.060 (95% CI: 1.020, 1.101), respectively. CONCLUSION: Elevated levels of ambient particulate matter are associated with an increase in EHVs for hypertension in Beijing, China.
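To make the reported effect sizes concrete, the sketch below back-calculates the implied overall five-day log-odds coefficients from the published odds ratios and rescales them per 10 μg/m³; the coefficients are assumptions derived only from the figures above, not the authors' model output.

```python
import math

# Minimal sketch (not the authors' code): a case-crossover analysis fits a
# conditional logistic model, giving a log-odds coefficient per unit (1 ug/m3)
# of pollutant. The reported odds ratios are per 10 ug/m3, so the summed
# five-day (distributed-lag) coefficient is scaled by 10 before exponentiating.
# The beta values below are back-calculated from the reported ORs purely for
# illustration.

def odds_ratio(beta_per_unit: float, increment: float = 10.0) -> float:
    """OR for an `increment` ug/m3 rise given a per-unit log-odds coefficient."""
    return math.exp(beta_per_unit * increment)

beta_pm25 = math.log(1.084) / 10   # implied overall 5-day coefficient for PM2.5
beta_pm10 = math.log(1.060) / 10   # implied overall 5-day coefficient for PM10

print(f"PM2.5: OR per 10 ug/m3 = {odds_ratio(beta_pm25):.3f}")
print(f"PM10 : OR per 10 ug/m3 = {odds_ratio(beta_pm10):.3f}")
```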
Abstract:
Objective: To examine exercise-induced changes in the reward value of food during medium-term supervised exercise in obese individuals. Subjects/Methods: The study was a 12-week supervised exercise intervention prescribed to expend 500 kcal/day, 5 days/week. Thirty-four sedentary obese males and females were identified as responders (R) or non-responders (NR) to the intervention according to changes in body composition relative to the measured energy expended during exercise. Food reward (ratings of liking and wanting, and relative preference by forced-choice pairs) for an array of food images was assessed before and after an acute exercise bout. Results: Twenty responders and 14 non-responders were identified. R lost 5.2±2.4 kg of total fat mass and NR lost 1.7±1.4 kg. After acute exercise, liking for all foods increased in NR compared with no change in R. Furthermore, NR showed an increase in wanting and relative preference for high-fat sweet foods. These differences were independent of 12 weeks of regular exercise and weight loss. Conclusion: Individuals who showed an immediate post-exercise increase in liking, wanting, and preference for high-fat sweet foods displayed a smaller reduction in fat mass with exercise. For some individuals, exercise increases the reward value of food and diminishes the impact of exercise on fat loss.
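The responder/non-responder classification compares measured fat-mass change with the change expected from the energy expended in supervised sessions. The sketch below is a hypothetical version of such a rule; the 9,500 kcal/kg energy density of fat tissue and the 100% threshold are illustrative assumptions, not the study's actual criterion.

```python
# Hypothetical sketch of one way to flag exercise "responders": compare measured
# fat-mass loss against the loss predicted from the energy expended in the
# supervised sessions. The 9500 kcal/kg fat-tissue energy density and the
# threshold are illustrative assumptions only.

KCAL_PER_KG_FAT = 9500.0

def predicted_fat_loss_kg(total_exercise_kcal: float) -> float:
    return total_exercise_kcal / KCAL_PER_KG_FAT

def is_responder(measured_loss_kg: float, total_exercise_kcal: float,
                 threshold: float = 1.0) -> bool:
    """Responder if measured loss reaches `threshold` of the predicted loss."""
    return measured_loss_kg >= threshold * predicted_fat_loss_kg(total_exercise_kcal)

# 12 weeks x 5 sessions/week x 500 kcal/session = 30,000 kcal prescribed
total_kcal = 12 * 5 * 500
print(round(predicted_fat_loss_kg(total_kcal), 2))   # ~3.16 kg predicted
print(is_responder(5.2, total_kcal))                 # True  (responder-like change)
print(is_responder(1.7, total_kcal))                 # False (non-responder-like change)
```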
Abstract:
Relatively little information has been reported about foot and ankle problems experienced by nurses, despite anecdotal evidence suggesting they are common ailments. The purpose of this study was to improve knowledge about the prevalence of foot and ankle musculoskeletal disorders (MSDs) and to explore relationships between these MSDs and proposed risk factors. A review of the literature relating to work-related MSDs, MSDs in nursing, foot and lower-limb MSDs, screening for work-related MSDs, foot discomfort, footwear and the prevalence of foot problems in the community was undertaken. Based on the review, theoretical risk factors were proposed that pertained to the individual characteristics of the nurses, their work activity or their work environment. Three studies were then undertaken. A cross-sectional survey of 304 nurses working in a large tertiary paediatric hospital established the prevalence of foot and ankle MSDs. The survey collected information about self-reported risk factors of interest. The second study involved the clinical examination of a subgroup of 40 nurses to examine changes in body discomfort, foot discomfort and postural sway over the course of a single work shift. Objective measurements of additional risk factors, such as individual foot posture (arch index) and the hardness of shoe midsoles, were performed. A final study was used to confirm the test-retest reliability of important aspects of the survey and key clinical measurements. Foot and ankle problems were the most common MSDs experienced by nurses in the preceding seven days (42.7% of nurses). They were the second most common MSDs to cause disability in the last 12 months (17.4% of nurses), and the third most common MSDs experienced by nurses in the last 12 months (54% of nurses). Substantial foot discomfort (Visual Analogue Scale (VAS) score of 50 mm or more) was experienced by 48.5% of nurses at some time in the last 12 months. Individual risk factors, such as obesity and the number of self-reported foot conditions (e.g., calluses, curled toes, flat feet), were strongly associated with the likelihood of experiencing foot problems in the last seven days or during the last 12 months. These risk factors showed consistent associations with disabling foot conditions and substantial foot discomfort. Some of these associations were dependent upon work-related risk factors, such as the location within the hospital and the average hours worked per week. Working in the intensive care unit was associated with higher odds of experiencing foot problems within the last seven days, foot problems in the last 12 months and foot problems that impaired activity in the last 12 months. Changes in foot discomfort experienced within a day showed large individual variability. Fifteen of the forty nurses experienced moderate/substantial foot discomfort at the end of their shift (VAS of 25 mm or more). Analysis of the association between risk factors and moderate/substantial foot discomfort revealed that foot discomfort was less likely for nurses who were older, had a greater BMI or had lower foot arches, as indicated by higher arch index scores. The nurses' postural sway decreased over the course of the work shift, suggesting improved body balance by the end of the day. These findings were unexpected. Further clinical studies examining individual nurses on several work shifts are needed to confirm these results, particularly due to the small sample size and the single measurement occasion.
There are more than 280,000 nurses registered to practice in Australia. The nursing workforce is ageing and the prevalence of foot problems will increase. If the prevalence estimates from this study are extrapolated to the profession generally, more than 70,000 hospital nurses have experienced substantial foot discomfort and 25,000-30,000 hospital nurses have been limited in their activity due to foot problems during the last 12 months. Nurses with underlying foot conditions were more likely to report having foot problems at work. Strategies to prevent or manage foot conditions exist and should be disseminated to nurses. Obesity is a significant risk factor for foot and ankle MSDs, and obese nurses may need particular assistance to manage foot problems. The risk of foot problems for particular groups of nurses, e.g., obese nurses, may vary depending upon their location within the hospital. Further research is needed to confirm the findings of this study. Similar studies should be conducted in other occupational groups that require workers to stand for prolonged periods.
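The workforce extrapolation above can be reproduced with simple prevalence arithmetic; the 150,000 figure assumed below for hospital-based nurses is a hypothetical denominator chosen only to show the calculation, not the thesis's own estimate.

```python
# Illustrative back-of-envelope extrapolation of the reported prevalences.
# The hospital-nurse denominator is a hypothetical assumption for illustration.

HOSPITAL_NURSES = 150_000          # assumed number of Australian hospital nurses
P_SUBSTANTIAL_DISCOMFORT = 0.485   # VAS >= 50 mm at some time in the last 12 months
P_DISABLING_PROBLEMS = 0.174       # activity-limiting foot MSDs, last 12 months

print(round(HOSPITAL_NURSES * P_SUBSTANTIAL_DISCOMFORT))  # ~72,750 (over 70,000)
print(round(HOSPITAL_NURSES * P_DISABLING_PROBLEMS))      # ~26,100 (25,000-30,000)
```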
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
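To illustrate the distribution-modelling step described above, the sketch below estimates a generalized Gaussian shape parameter for subband-like data. A simple moment-matching estimator stands in for the thesis's least-squares formulation, and the synthetic data are only a stand-in for real wavelet coefficients.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

# Minimal sketch of modelling wavelet-subband coefficients with a generalized
# Gaussian distribution (GGD). Moment matching of E|x| / sqrt(E[x^2]) to its
# closed form is used here purely for illustration; it is not the thesis's
# least-squares estimator.

def ggd_ratio(beta: float) -> float:
    """Theoretical E|X| / sqrt(E[X^2]) for a zero-mean GGD with shape beta."""
    return gamma(2.0 / beta) / np.sqrt(gamma(1.0 / beta) * gamma(3.0 / beta))

def estimate_shape(coeffs: np.ndarray) -> float:
    """Moment-matching estimate of the GGD shape parameter for one subband."""
    r = np.mean(np.abs(coeffs)) / np.sqrt(np.mean(coeffs ** 2))
    return brentq(lambda b: ggd_ratio(b) - r, 0.1, 10.0)

rng = np.random.default_rng(0)
laplacian_like = rng.laplace(scale=1.0, size=50_000)   # true shape = 1
gaussian_like = rng.normal(scale=1.0, size=50_000)     # true shape = 2
print(round(estimate_shape(laplacian_like), 2))        # ~1.0
print(round(estimate_shape(gaussian_like), 2))         # ~2.0
```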
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, that of maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
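As a toy illustration of product-code (split) vector quantization of spectral parameter vectors, the sketch below trains an independent small codebook for each sub-vector; the random training data, the 5+5 split and the 64-entry codebooks are illustrative choices, not the configuration studied in the thesis.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

# Toy product-code (split) VQ: a 10-dimensional vector is split into two
# sub-vectors, each quantized with its own codebook, so the effective codebook
# is the Cartesian product of the parts. All parameters here are illustrative.

rng = np.random.default_rng(1)
train = rng.normal(size=(5000, 10))                 # stand-in for spectral vectors
splits = [slice(0, 5), slice(5, 10)]                # two sub-vectors of 5 dims

codebooks = [kmeans(train[:, s], 64)[0] for s in splits]   # 64 entries each

def encode(vec: np.ndarray) -> list[int]:
    """Return one codebook index per sub-vector (6 bits each here)."""
    return [int(vq(vec[s][None, :], cb)[0][0]) for s, cb in zip(splits, codebooks)]

def decode(indices: list[int]) -> np.ndarray:
    return np.concatenate([cb[i] for i, cb in zip(indices, codebooks)])

x = rng.normal(size=10)
idx = encode(x)
print(idx, float(np.mean((x - decode(idx)) ** 2)))  # indices and reconstruction MSE
```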
Abstract:
Despite recent developments in fixed-film combined biological nutrient removal (BNR) technology, fixed-film systems (i.e., biofilters) are still at the early stages of development and their application has been limited to a few laboratory-scale experiments. Achieving enhanced biological phosphorus removal in fixed-film systems requires exposing the micro-organisms and the waste stream to alternating anaerobic/aerobic or anaerobic/anoxic conditions in cycles. The concept of cycle duration (CD) as a process control parameter is unique to fixed-film BNR systems, has not been previously investigated, and can be used to optimise the performance of such systems. The CD refers to the elapsed time before the biomass is re-exposed to the same environmental conditions in the cycle. Fixed-film systems offer many advantages over suspended-growth systems, such as reduced operating costs, simplicity of operation, absence of sludge recycling problems, and compactness. Controlling nutrient discharges to water bodies improves water quality and fish production and allows water reuse. The main objective of this study was to develop a fundamental understanding of the effect of CD on the transformations of nutrients in fixed-film biofilter systems subjected to alternating aeration/no-aeration cycles. A fixed-film biofilter system consisting of three up-flow biofilters connected in series was developed and tested. The first and third biofilters were operated in a cyclic mode in which the biomass was subjected to aeration/no-aeration cycles. The influent was simulated aquaculture wastewater whose composition was based on actual water quality parameters of aquaculture wastewater from a prawn grow-out facility. The influent contained 8.5-9.3 mg/L ammonia-N, 8.5-8.7 mg/L phosphate-P, and 45-50 mg/L acetate. Two independent studies were conducted at two biofiltration rates to evaluate and confirm the effect of CD on nutrient transformations in the biofilter system for application in aquaculture. A third study was conducted to enhance denitrification in the system using an external carbon source dosed at rates varying from 0 to 24 mL/min. The CD was varied in the range of 0.25-120 hours for the first two studies and fixed at 12 hours for the third study. This study identified the CD as an important process control parameter that can be used to optimise the performance of full-scale fixed-film systems for BNR, which represents a novel contribution to this field of research. The CD resulted in environmental conditions that inhibited or enhanced nutrient transformations. The effect of CD on BNR in fixed-film systems in terms of phosphorus biomass saturation and depletion has been established. Short CDs did not permit the establishment of anaerobic activity in the un-aerated biofilter and, thus, inhibited phosphorus release. Long CDs resulted in extended anaerobic activity and, thus, in active phosphorus release; however, long CDs also depleted the biomass phosphorus reservoir in the releasing biofilter and saturated the biomass phosphorus reservoir in the up-taking biofilter in the cycle. This phosphorus biomass saturation/depletion phenomenon imposes a practical limit on how short or long the CD can be. The CD should end just before saturation or depletion occurs; for the system tested, the optimal CD was 12 hours at the biofiltration rates tested.
The system achieved limited net phosphorus removal due to the limited sludge wasting and the lack of an external carbon supply during phosphorus uptake. The phosphorus saturation and depletion reflected the need to extract phosphorus from the phosphorus-rich micro-organisms, for example, through back-washing. The major challenges of achieving phosphorus removal in the system included: (1) overcoming the deterioration in the performance of the system during the transition period following the start of each new cycle; and (2) wasting excess phosphorus-saturated biomass following the aeration cycle. Denitrification occurred in poorly aerated sections of the third biofilter and generally declined as the CD increased and as time progressed within the individual cycle. Denitrification and phosphorus uptake were supported by an internal organic carbon source, and the addition of an external carbon source (acetate) to the third biofilter improved the denitrification efficiency of the system from 18.4% without supplemental carbon to 88.7% when the carbon dose reached 24 mL/min. The removal of TOC and nitrification improved as the CD increased, as a result of the reduction in the frequency of transition periods between cycles. A conceptual design of an effective fixed-film BNR biofilter system for the treatment of the influent simulated aquaculture wastewater was proposed based on the findings of the study.
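For context on the efficiency figures above, the sketch below shows the conventional removal-efficiency calculation with hypothetical influent and effluent oxidized-nitrogen concentrations chosen only to give values of a similar order to those reported; it is not the study's data or analysis.

```python
# Illustrative arithmetic only: denitrification efficiency expressed as the
# fraction of oxidized nitrogen removed across the system. The influent and
# effluent concentrations below are hypothetical.

def denitrification_efficiency(nox_in_mg_l: float, nox_out_mg_l: float) -> float:
    return 100.0 * (nox_in_mg_l - nox_out_mg_l) / nox_in_mg_l

print(round(denitrification_efficiency(8.5, 6.9), 1))   # ~18.8% (no added carbon)
print(round(denitrification_efficiency(8.5, 1.0), 1))   # ~88.2% (with added carbon)
```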
Abstract:
Since the 1960s, the value relevance of accounting information has been an important topic in accounting research. The value relevance research provides evidence as to whether accounting numbers relate to corporate value in a predicted manner (Beaver, 2002). Such research is not only important for investors but also provides useful insights into accounting reporting effectiveness for standard setters and other users. Both the quality of accounting standards used and the effectiveness associated with implementing these standards are fundamental prerequisites for high value relevance (Hellstrom, 2006). However, while the literature comprehensively documents the value relevance of accounting information in developed markets, little attention has been given to emerging markets where the quality of accounting standards and their enforcement are questionable. Moreover, there is currently no known research that explores the association between level of compliance with International Financial Reporting Standards (IFRS) and the value relevance of accounting information. Motivated by the lack of research on the value relevance of accounting information in emerging markets and the unique institutional setting in Kuwait, this study has three objectives. First, it investigates the extent of compliance with IFRS with respect to firms listed on the Kuwait Stock Exchange (KSE). Second, it examines the value relevance of accounting information produced by KSE-listed firms over the 1995 to 2006 period. The third objective links the first two and explores the association between the level of compliance with IFRS and the value relevance of accounting information to market participants. Since it is among the first countries to adopt IFRS, Kuwait provides an ideal setting in which to explore these objectives. In addition, the Kuwaiti accounting environment provides an interesting regulatory context in which each KSE-listed firm is required to appoint at least two external auditors from separate auditing firms. Based on the research objectives, five research questions (RQs) are addressed. RQ1 and RQ2 aim to determine the extent to which KSE-listed firms comply with IFRS and factors contributing to variations in compliance levels. These factors include firm attributes (firm age, leverage, size, profitability, liquidity), the number of brand name (Big-4) auditing firms auditing a firm’s financial statements, and industry categorization. RQ3 and RQ4 address the value relevance of IFRS-based financial statements to investors. RQ5 addresses whether the level of compliance with IFRS contributes to the value relevance of accounting information provided to investors. Based on the potential improvement in value relevance from adopting and complying with IFRS, it is predicted that the higher the level of compliance with IFRS, the greater the value relevance of book values and earnings. The research design of the study consists of two parts. First, in accordance with prior disclosure research, the level of compliance with mandatory IFRS is examined using a disclosure index. Second, the value relevance of financial statement information, specifically, earnings and book value, is examined empirically using two valuation models: price and returns models. The combined empirical evidence that results from the application of both models provides comprehensive insights into value relevance of accounting information in an emerging market setting. 
Consistent with expectations, the results show the average level of compliance with IFRS mandatory disclosures for all KSE-listed firms in 2006 was 72.6 percent, thus indicating that KSE-listed firms generally did not fully comply with all requirements. Significant variations in the extent of compliance are observed among firms and across accounting standards. As predicted, older, highly leveraged, larger, and profitable KSE-listed firms are more likely to comply with IFRS required disclosures. Interestingly, significant differences in the level of compliance are observed across the three possible auditor combinations of two Big-4, two non-Big-4, and mixed audit firm types. The results for the price and returns models provide evidence that earnings and book values are significant factors in the valuation of KSE-listed firms during the 1995 to 2006 period. However, the results show that the value relevance of earnings and book values decreased significantly during that period, suggesting that investors rely less on financial statements, possibly due to the increased availability of non-financial-statement information sources. Notwithstanding this decline, a significant association is observed between the level of compliance with IFRS and the value relevance of earnings and book value to KSE investors. The findings make several important contributions. First, they raise concerns about the effectiveness of the regulatory body that oversees compliance with IFRS in Kuwait. Second, they challenge the effectiveness of the two-auditor requirement in promoting compliance with regulations as well as the associated cost-benefit of this requirement for firms. Third, they provide the first known empirical evidence linking the level of IFRS compliance with the value relevance of financial statement information. Finally, the findings are relevant for standard setters and for their current review of KSE regulations. In particular, they highlight the importance of establishing and maintaining adequate monitoring and enforcement mechanisms to ensure compliance with accounting standards. In addition, the finding that stricter compliance with IFRS improves the value relevance of accounting information highlights the importance of full compliance with IFRS and not just mere adoption.
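For readers unfamiliar with the valuation models named above, the following is a hedged sketch of the standard price model (share price regressed on earnings per share and book value per share, with R-squared read as value relevance). The data are simulated and the coefficients arbitrary; this is not the thesis's model specification or its KSE data.

```python
import numpy as np
import statsmodels.api as sm

# Hedged sketch of a price model: Price = a0 + a1*EPS + a2*BVPS + error.
# Simulated data only; the R-squared would be interpreted as value relevance.

rng = np.random.default_rng(2)
n = 300
eps = rng.normal(0.8, 0.3, n)                # earnings per share
bvps = rng.normal(5.0, 1.0, n)               # book value per share
price = 2.0 + 4.0 * eps + 0.9 * bvps + rng.normal(0, 1.5, n)

X = sm.add_constant(np.column_stack([eps, bvps]))
model = sm.OLS(price, X).fit()
print(model.params)      # intercept and coefficients on EPS and BVPS
print(model.rsquared)    # "value relevance" under this simulated setting
```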
Abstract:
The material presented in this thesis may be viewed as comprising two key parts: the first concerns batch cryptography specifically, whilst the second deals with how this form of cryptography may be applied to security-related applications, such as electronic cash, to improve the efficiency of their protocols. The objective of batch cryptography is to devise more efficient primitive cryptographic protocols. In general, these primitives make use of some property such as homomorphism to perform a computationally expensive operation on a collective input set. The idea is to amortise an expensive operation, such as modular exponentiation, over the input. Most of the research work in this field has concentrated on its employment as a batch verifier of digital signatures. It is shown that several new attacks may be launched against these published schemes as some weaknesses are exposed. Another common use of batch cryptography is the simultaneous generation of digital signatures. There is significantly less previous work in this area, and the present schemes have limited use in practical applications. Several new batch signature schemes are introduced that improve upon the existing techniques, and some practical uses are illustrated. Electronic cash is a technology that demands complex protocols in order to furnish several security properties. These typically include anonymity, traceability of a double spender, and off-line payment features. Presently, the most efficient schemes make use of coin divisibility to withdraw one large financial amount that may be progressively spent with one or more merchants. Several new cash schemes are introduced here that make use of batch cryptography for improving the withdrawal, payment, and deposit of electronic coins. The devised schemes employ both the batch signature and verification techniques introduced, demonstrating improved performance over contemporary divisibility-based structures. The solutions also provide an alternative paradigm for the construction of electronic cash systems. Whilst electronic cash is used as the vehicle for demonstrating the relevance of batch cryptography to security-related applications, the applicability of the techniques introduced extends well beyond this.
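To illustrate the idea of amortising modular exponentiation, the toy below applies the naive "product test" batch verifier to textbook RSA signatures under a single key. It is not one of the thesis's schemes, and precisely this kind of unrandomised batching is what the attacks discussed above exploit; practical batch verifiers add per-signature randomisation.

```python
from math import prod

# Toy, insecure demo: because (s1*s2*...*sk)^e = s1^e * ... * sk^e (mod n),
# k textbook RSA signatures under one key can be checked with a single
# modular exponentiation instead of k.

p, q, e = (1 << 31) - 1, (1 << 61) - 1, 65537   # Mersenne primes, toy key only
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))

def sign(m: int) -> int:
    """Textbook RSA 'signature' on an integer message < n (no hashing/padding)."""
    return pow(m, d, n)

def batch_verify(msgs: list[int], sigs: list[int]) -> bool:
    """Accept iff (prod sigs)^e == prod msgs (mod n): one exponentiation total."""
    return pow(prod(sigs) % n, e, n) == prod(msgs) % n

msgs = [123456, 987654, 555555]
sigs = [sign(m) for m in msgs]
print(batch_verify(msgs, sigs))                 # True
print(batch_verify(msgs, sigs[:-1] + [42]))     # False for a corrupted signature
```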
Abstract:
Artificial neural network (ANN) learning methods provide a robust and non-linear approach to approximating the target function for many classification, regression and clustering problems. ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge. The arguments are the poor comprehensibility of the learned ANN, and the inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explanation of the decision process in ANNs in the form of symbolic rules (predicate rules with variables); and (2) provision of explanatory capability by mapping the general conceptual knowledge that is learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology GYAN is developed and evaluated for the task of extracting knowledge from the trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules, and subsequently allows user interaction by interfacing with a knowledge based reasoner. The performance of GYAN is demonstrated using a number of real world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity, and (2) a concise explanation is given (in terms of rules, facts and predicates activated in a reasoning episode) as to why a particular instance is being classified into a certain category.
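As a flavour of rule extraction (not the GYAN methodology itself), the toy below enumerates the inputs of a single, already-trained neuron with assumed weights and reads its behaviour back as propositional conjunctions; GYAN operates on full networks and produces restricted first-order rules.

```python
from itertools import product
import numpy as np

# Deliberately simple, hypothetical illustration of mapping network behaviour
# to symbolic rules: enumerate the boolean inputs of a tiny "trained" neuron
# and collect the input combinations for which it fires.

weights = np.array([2.0, 2.0, -3.0])   # assumed trained weights for inputs a, b, c
bias = -1.0

def fires(inputs: tuple[int, ...]) -> bool:
    return float(np.dot(weights, inputs)) + bias > 0.0

names = ("a", "b", "c")
rules = []
for combo in product([0, 1], repeat=3):
    if fires(combo):
        literals = [n if v else f"not {n}" for n, v in zip(names, combo)]
        rules.append(" and ".join(literals))

print("class_positive <- " + "  |  ".join(rules))
# The three extracted conjunctions together simplify to: (a or b) and not c.
```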
Abstract:
The human-technology nexus is a strong focus of Information Systems (IS) research; however, very few studies have explored this phenomenon in anaesthesia. Anaesthesia has a long history of adoption of technological artifacts, ranging from early apparatus to present-day information systems such as electronic monitoring and pulse oximetry. This prevalence of technology in modern anaesthesia and the rich human-technology relationship provides a fertile empirical setting for IS research. This study employed a grounded theory approach that began with a broad initial guiding question and, through simultaneous data collection and analysis, uncovered a core category of technology appropriation. This emergent basic social process captures a central activity of anaesthetists and is supported by three major concepts: knowledge-directed medicine, complementary artifacts and culture of anaesthesia. The outcomes of this study are: (1) a substantive theory that integrates the aforementioned concepts and pertains to the research setting of anaesthesia and (2) a formal theory, which further develops the core category of appropriation from anaesthesia-specific to a broader, more general perspective. These outcomes fulfill the objective of a grounded theory study, namely the formation of theory that describes and explains observed patterns in the empirical field. In generalizing the notion of appropriation, the formal theory is developed using the theories of Karl Marx. This Marxian model of technology appropriation is a three-tiered theoretical lens that examines appropriation behaviours at a highly abstract level, connecting the stages of natural, species and social being to the transition of a technology-as-artifact to a technology-in-use via the processes of perception, orientation and realization. The contributions of this research are two-fold: (1) the substantive model contributes to practice by providing a model that describes and explains the human-technology nexus in anaesthesia, and thereby offers potential predictive capabilities for designers and administrators to optimize future appropriations of new anaesthetic technological artifacts; and (2) the formal model contributes to research by drawing attention to the philosophical foundations of appropriation in the work of Marx, and subsequently expanding the current understanding of contemporary IS theories of adoption and appropriation.
Abstract:
Acute lower respiratory tract infections (ALRTIs) are a common cause of morbidity and mortality among children under 5 years of age worldwide, with pneumonia as the most severe manifestation. Although the incidence of severe disease varies both between individuals and countries, there is still no clear understanding of what causes this variation. Studies of community-acquired pneumonia (CAP) have traditionally not focused on viral causes of disease due to a paucity of diagnostic tools. However, with the emergence of molecular techniques, it is now known that viruses outnumber bacteria as the etiological agents of childhood CAP, especially in children under 2 years of age. The main objective of this study was to investigate viruses contributing to disease severity in cases of childhood ALRTI, using a two-year cohort study following 2,014 infants and children enrolled in Bandung, Indonesia. A total of 352 nasopharyngeal washes collected from 256 paediatric ALRTI patients were used for analysis. A subset of samples was screened using a novel microarray pathogen detection method that identified respiratory syncytial virus (RSV), human metapneumovirus (hMPV) and human rhinovirus (HRV) in the samples. Real-time RT-PCR was used both to confirm and to quantify the viruses found in the nasopharyngeal samples. Viral copy numbers were determined and normalised to the number of human cells collected, using 18S rRNA. Molecular epidemiology was performed for RSV A and hMPV using sequences of the glycoprotein gene and the nucleoprotein gene, respectively, to determine the genotypes circulating in this Indonesian paediatric cohort. This study found that HRV (119/352; 33.8%) was the most common virus detected as the cause of respiratory tract infections in this cohort, followed by RSV A (73/352; 20.7%), hMPV (30/352; 8.5%) and RSV B (12/352; 3.4%). Co-infections with more than two viruses were detected in 31 episodes (episodes being defined as infections occurring more than two weeks apart), accounting for 8.8% of the 352 samples tested or 15.4% of the 201 episodes with at least one virus detected. The RSV A genotypes circulating in this population were predominantly GA2, GA5 and GA7, while the hMPV genotypes circulating were mainly A2a (27/30; 90.0%), B2 (2/30; 6.7%) and A1 (1/30; 3.3%). This study found no evidence of disease severity being associated either with a specific virus or viral strain, or with viral load. However, this study did find a significant association between co-infection with RSV A and HRV and severe disease (P = 0.006), suggesting that this may be a novel cause of severe disease.
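The normalisation of viral copy numbers to human cellular material mentioned above can be illustrated with the hypothetical arithmetic below; the copies-per-cell figure and sample values are assumptions for illustration only, not the study's calibration.

```python
# Illustrative arithmetic only: normalise a viral copy number to the amount of
# human material in a sample by dividing by the cell count inferred from a
# cellular reference target such as 18S rRNA. All values are hypothetical.

RNA18S_COPIES_PER_CELL = 100.0   # assumed yield of the cellular reference target

def viral_load_per_1000_cells(viral_copies: float, rna18s_copies: float) -> float:
    cells = rna18s_copies / RNA18S_COPIES_PER_CELL
    return 1000.0 * viral_copies / cells

# Two hypothetical samples with the same raw viral copy number but different
# amounts of collected human material give very different normalised loads.
print(viral_load_per_1000_cells(5.0e4, 2.0e6))   # 2500 copies per 1000 cells
print(viral_load_per_1000_cells(5.0e4, 2.0e7))   # 250 copies per 1000 cells
```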
Abstract:
Bone generation by autogenous cell transplantation in combination with a biodegradable scaffold is one of the most promising techniques being developed in craniofacial surgery. The objective of this combined in vitro and in vivo study was to evaluate the morphology and osteogenic differentiation of bone marrow derived mesenchymal progenitor cells and calvarial osteoblasts in a two-dimensional (2-D) and three-dimensional (3-D) culture environment (Part I of this study) and their potential in combination with a biodegradable scaffold to reconstruct critical-size calvarial defects in an autologous animal model [Part II of this study; see Schantz, J.T., et al. Tissue Eng. 2003;9(Suppl. 1):S-127-S-139; this issue]. New Zealand White rabbits were used to isolate osteoblasts from calvarial bone chips and bone marrow stromal cells from iliac crest bone marrow aspirates. Multilineage differentiation potential was evaluated in a 2-D culture setting. After amplification, the cells were seeded within a fibrin matrix into a 3-D polycaprolactone (PCL) scaffold system. The constructs were cultured for up to 3 weeks in vitro and assayed for cell attachment and proliferation using phase-contrast light microscopy, confocal laser microscopy, scanning electron microscopy, and the MTS cell metabolic assay. Osteogenic differentiation was analyzed by determining the expression of alkaline phosphatase (ALP) and osteocalcin. The bone marrow-derived progenitor cells demonstrated the potential to be induced to the osteogenic, adipogenic, and chondrogenic pathways. In a 3-D environment, cell-seeded PCL scaffolds evaluated by confocal laser microscopy revealed continuous cell proliferation and homogeneous cell distribution within the PCL scaffolds. On osteogenic induction, mesenchymal progenitor cells (12 U/L) produced significantly higher (p < 0.05) ALP activity than osteoblasts (2 U/L); however, no significant differences were found in osteocalcin expression. In conclusion, this study showed that the combination of a mechanically stable synthetic framework (PCL scaffolds) and a biomimetic hydrogel (fibrin glue) provides a potential matrix for bone tissue-engineering applications. Comparison of osteogenic differentiation between the two mesenchymal cell sources revealed a similar pattern.
Abstract:
BACKGROUND Parenting-skills training may be an effective age-appropriate child behavior-modification strategy to assist parents in addressing childhood overweight. OBJECTIVE Our goal was to evaluate the relative effectiveness of parenting-skills training as a key strategy for the treatment of overweight children. DESIGN The design consisted of an assessor-blinded, randomized, controlled trial involving 111 (64% female) overweight, prepubertal children 6 to 9 years of age randomly assigned to parenting-skills training plus intensive lifestyle education, parenting-skills training alone, or a 12-month wait-listed control. Height, BMI, and waist-circumference z score and metabolic profile were assessed at baseline, 6 months, and 12 months (intention to treat). RESULTS After 12 months, the BMI z score was reduced by ∼10% with parenting-skills training plus intensive lifestyle education versus ∼5% with parenting-skills training alone or wait-listing for intervention. Waist-circumference z score fell over 12 months in both intervention groups but not in the control group. There was a significant gender effect, with greater reduction in BMI and waist-circumference z scores in boys compared with girls. CONCLUSION Parenting-skills training combined with promoting a healthy family lifestyle may be an effective approach to weight management in prepubertal children, particularly boys. Future studies should be powered to allow gender subanalysis.
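For context on the BMI z-score outcome, the sketch below shows the standard LMS calculation used by growth references; the L, M and S values and the child's BMI are hypothetical placeholders, not CDC/WHO table entries, and this is not the trial's analysis.

```python
from math import log

# Hedged sketch of a BMI z-score via the LMS method used by growth references:
# Z = ((BMI/M)**L - 1) / (L*S) for L != 0, or log(BMI/M)/S when L == 0.
# The reference values below are hypothetical placeholders.

def bmi_z_score(bmi: float, L: float, M: float, S: float) -> float:
    if L == 0:
        return log(bmi / M) / S
    return ((bmi / M) ** L - 1.0) / (L * S)

L_ref, M_ref, S_ref = -2.0, 16.0, 0.12      # hypothetical reference for one age/sex
z_baseline = bmi_z_score(21.0, L_ref, M_ref, S_ref)
print(round(z_baseline, 2))        # ~1.75 for this hypothetical child
print(round(0.9 * z_baseline, 2))  # ~1.57, i.e. the ~10% reduction reported for
                                   # the combined-intervention group
```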