Abstract:
Background: Malaria rapid diagnostic tests (RDTs) are increasingly used by remote health personnel with minimal training in laboratory techniques. RDTs must, therefore, be as simple, safe and reliable as possible. Transfer of blood from the patient to the RDT is critical to safety and accuracy, and poses a significant challenge to many users. Blood transfer devices were evaluated for accuracy and precision of volume transferred, safety and ease of use, to identify the most appropriate devices for use with RDTs in routine clinical care. Methods: Five devices, a loop, straw-pipette, calibrated pipette, glass capillary tube, and a new inverted cup device, were evaluated in Nigeria, the Philippines and Uganda. The 227 participating health workers used each device to transfer blood from a simulated finger-prick site to filter paper. For each transfer, the number of attempts required to collect and deposit blood and any spilling of blood during transfer were recorded. Perceptions of ease of use and safety of each device were recorded for each participant. Blood volume transferred was calculated from the area of blood spots deposited on filter paper. Results: The overall mean volumes transferred by devices differed significantly from the target volume of 5 microliters (p < 0.001). The inverted cup (4.6 microliters) most closely approximated the target volume. The glass capillary was excluded from volume analysis as the estimation method used is not compatible with this device. The calibrated pipette accounted for the largest proportion of blood exposures (23/225, 10%); exposures ranged from 2% to 6% for the other four devices. The inverted cup was considered easiest to use in blood collection (206/226, 91%); the straw-pipette and calibrated pipette were rated lowest (143/225 [64%] and 135/225 [60%], respectively). Overall, the inverted cup was the most preferred device (72%, 163/227), followed by the loop (61%, 138/227). Conclusions: The performance of blood transfer devices varied in this evaluation of accuracy, blood safety, ease of use, and user preference. The inverted cup design achieved the highest overall performance, while the loop also performed well. These findings have relevance for any point-of-care diagnostics that require blood sampling.
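As a minimal sketch of the principle mentioned above, converting blood-spot area on filter paper into transferred volume typically relies on a calibration against spots of known volume; the calibration figures below are hypothetical and are not the paper's protocol or data.

```python
# Illustrative sketch: estimating transferred blood volume from the area of a
# spot on filter paper, assuming a linear calibration (volume = k * area)
# fitted from spots of known volume. All numbers here are hypothetical.
import numpy as np

# Hypothetical calibration spots: measured area (mm^2) and known volume (uL)
cal_areas = np.array([6.0, 12.0, 18.0, 24.0])
cal_volumes = np.array([2.5, 5.0, 7.5, 10.0])

# Least-squares slope through the origin: k = sum(a*v) / sum(a^2)
k = np.dot(cal_areas, cal_volumes) / np.dot(cal_areas, cal_areas)

def estimate_volume(area_mm2):
    """Estimated volume (microlitres) for a measured spot area."""
    return k * area_mm2

print(round(estimate_volume(11.0), 2))  # -> 4.58 uL for an 11 mm^2 spot under this hypothetical calibration
```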
Abstract:
The work presented in this report aims to implement a cost-effective offline mission path planner for aerial inspection tasks of large linear infrastructures. Like most real-world optimisation problems, mission path planning involves a number of objectives which ideally should be minimised simultaneously. Understandably, the objectives of a practical optimisation problem conflict with each other, and minimising one of them necessarily rules out minimising the others. This leads to the need to find a set of optimal solutions for the problem; once such a set of available options is produced, the mission planning problem is reduced to a decision-making problem for the mission specialists, who will choose the solution which best fits the requirements of the mission. The goal of this work is therefore to develop a Multi-Objective optimisation tool able to provide the mission specialists with a set of optimal solutions for the inspection task, amongst which the final trajectory will be chosen, given the environment data, the mission requirements and the definition of the objectives to minimise. All the possible optimal solutions of a Multi-Objective optimisation problem are said to form the Pareto-optimal front of the problem. For any of the Pareto-optimal solutions, it is impossible to improve one objective without worsening at least one other. Amongst a set of Pareto-optimal solutions, no solution is absolutely better than another, and the final choice must be a trade-off between the objectives of the problem. Multi-Objective Evolutionary Algorithms (MOEAs) are recognised as a convenient method for exploring the Pareto-optimal front of Multi-Objective optimisation problems. Their efficiency is due to their inherently parallel, population-based architecture, which allows several optimal solutions to be found in a single run.
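As a concrete illustration of Pareto optimality (a sketch, not taken from the report), the snippet below filters a set of candidate solutions down to the non-dominated set for two hypothetical minimisation objectives, say path length and energy consumption:

```python
# Minimal sketch of Pareto-dominance filtering for a two-objective planner.
# The objective names (path length, energy) and the candidate values are
# illustrative assumptions, not taken from the report; both are minimised.

def dominates(a, b):
    """True if solution a is no worse than b in every objective
    and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Hypothetical candidate trajectories as (path_length_km, energy_kWh) pairs.
candidates = [(12.0, 3.1), (10.5, 3.8), (14.2, 2.6), (11.0, 4.0), (13.0, 2.9)]
print(pareto_front(candidates))  # -> [(12.0, 3.1), (10.5, 3.8), (14.2, 2.6), (13.0, 2.9)]
```

Every vector in the returned front is a defensible choice; picking among them is the decision-making step left to the mission specialists.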
Abstract:
Coffee is one of the most widely consumed beverages in the world and has a number of potential health benefits. Coffee may influence energy expenditure and energy intake, which in turn may affect body weight. However, the influence of coffee and its constituents – particularly caffeine – on appetite remains largely unexplored. The objective of this study was to examine the impact of coffee consumption (with and without caffeine) on appetite sensations, energy intake, gastric emptying, and plasma glucose between breakfast and lunch meals. In a double-blind, randomised crossover design, participants (n = 12, 9 women; mean ± SD age and BMI: 26.3 ± 6.3 y and 22.7 ± 2.2 kg·m⁻²) completed 4 trials: placebo (PLA), decaffeinated coffee (DECAF), caffeine (CAF), and caffeine with decaffeinated coffee (COF). Participants were given a standardised breakfast labelled with 13C-octanoic acid, 225 mL of the treatment beverage and a capsule containing either caffeine or placebo. Two hours later, another 225 mL of the treatment beverage and a second capsule were administered. Four and a half hours after breakfast, participants were given access to an ad libitum meal for determination of energy intake. Between meals, participants provided exhaled breath samples for determination of gastric emptying, venous blood samples and ratings of appetite sensations. Energy intake was not significantly different between the trials (means ± SD, p > 0.05; Placebo: 2118 ± 663 kJ; Decaf: 2128 ± 739 kJ; Caffeine: 2287 ± 649 kJ; Coffee: 2016 ± 750 kJ). Other than main effects of time (p < 0.05), no significant differences were detected for appetite sensations or plasma glucose between treatments (p > 0.05). Gastric emptying was not significantly different across trials (p > 0.05). No significant effects of decaffeinated coffee, caffeine or their combination were detected. However, the role of caffeine and/or coffee in the regulation of energy balance over longer periods of time warrants further investigation.
Abstract:
If DNA is the information of life, then proteins are the machines of life — but they must be assembled and correctly folded to function. A key step in the protein-folding pathway is the introduction of disulphide bonds between cysteine residues in a process called oxidative protein folding. Many bacteria use an oxidative protein-folding machinery to assemble proteins that are essential for cell integrity and to produce virulence factors. Although our current knowledge of this machinery stems largely from Escherichia coli K-12, this view must now be adjusted to encompass the wider range of disulphide catalytic systems present in bacteria.
Abstract:
A straightforward procedure for the acid digestion of geological samples with SiO2 concentrations ranging between about 40 and 80% is described. A powdered sample (200 mesh) of 500 mg was fused with 1000 mg of spectroflux at about 1000 °C in a platinum crucible. The melt was subsequently digested in an aqueous solution of HNO3 at 100 °C. Several systematic digestion procedures were followed using various concentrations of HNO3. It was found that a relationship could be established between dissolution time and acid concentration. For an acid concentration of 15%, an optimum dissolution time of under 4 min was recorded. To verify that the dissolutions were complete, they were subjected to rigorous quality control tests. The turbidity and viscosity were examined at different intervals and the results were compared with those of deionised water. No significant change in either parameter was observed. The shelf-life of each solution lasted for several months, after which time polymeric silicic acid formed in some solutions, resulting in the presence of a gelatinous solid. The method is cost effective and is clearly well suited for routine applications on a small scale, especially in laboratories in developing countries. ICP-MS was applied to the determination of 13 rare earth elements (REEs) and Hf in a set of 107 archaeological samples subjected to the above digestion procedure. The distribution of these elements was examined and the possibility of using the REEs for provenance studies is discussed.
Abstract:
This article examines the new model for corporate officer liability under section 144 of the Occupational Health and Safety Act 2004 (Vic), and explores the extent to which this might effectively extend responsibility for OHS offences to members of corporate groups, such as holding companies. In doing so, the authors canvass the failure of corporate law to impose such obligations on corporate officers in general, and on holding companies as shadow officers. It is argued that provisions such as section 144 of the Victorian Act should be included in all OHS legislation.
Abstract:
During the 18th and 19th centuries, prostitution came to be understood as a potentially disruptive element in the management of society. New forms of social control developed that sought to transform the souls of prostitutes to better control their bodies. Institutions for managing prostitutes, such as Magdalen Homes and lock hospitals, were introduced or increased in number throughout the British Empire, North America, and Western Europe. Often these institutions had as their stated objective the physical purification and moral reform of prostitutes, appearing to make a dramatic break with earlier methods of social control that had relied on practices of physical punishment and spatial segregation. Emergent institutions for the social control of prostitutes used a regimen of religious training, hard labor, and medical expertise. The objective of the Magdalen Home was not to punish sin but to absolve it, while the function of the lock hospital was not simply to confine the ill, but to confine the ill to "cure" them. The role of these institutions was not only symbolic, mirroring in some way the operation of earlier forms of social control, but was also practical and transformative. The mass institutionalization of prostitutes that occurred during the 18th and 19th centuries produced and emphasized sexual, class, and gender boundaries, grounded in the broad distinction between "pure" and "impure" women. Because of its association with sin, prostitution before the 18th century had been constructed as a religious problem relating to salvation and penitence. Throughout Western Europe during the Middle Ages, prostitutes, like the medieval leper and the Jew, were subject to restrictions designed to distinguish and isolate them from other members of their communities. The repression of prostitution during the Middle Ages was neither systematic nor highly organized, although it reinforced the image of the prostitute as sinful "other".
Abstract:
Background and Objectives: Although depression is a commonly occurring mental illness, research concerning strategies for early detection and prophylaxis has not until now focused on the possible utility of measures of Emotional Intelligence (EI) as a potential predictive factor. The current study aimed to investigate the relationship between EI and a clinical diagnosis of depression in a cohort of adults. Methods: Sixty-two patients (59.70% female) with a DSM-IV-TR diagnosis of a major affective disorder and 39 age-matched controls (56.40% female) completed self-report instruments assessing EI and depression in a cross-sectional study. Results: Significant associations were observed between severity of depression and the EI dimensions of Emotional Management (r = -0.56) and Emotional Control (r = -0.62). Conclusions: Measures of EI may have predictive value in terms of early identification of those at risk for developing depression. The current study points to the potential value of conducting further studies of a prospective nature.
Abstract:
Background: Depressive disorders were a leading cause of burden in the Global Burden of Disease (GBD) 1990 and 2000 studies. Here, we analyze the burden of depressive disorders in GBD 2010 and present severity proportions, burden by country, region, age, sex, and year, as well as the burden of depressive disorders as a risk factor for suicide and ischemic heart disease. Methods and Findings: Burden was calculated for major depressive disorder (MDD) and dysthymia. A systematic review of epidemiological data was conducted. The data were pooled using a Bayesian meta-regression. Disability weights from population survey data quantified the severity of health loss from depressive disorders. These weights were used to calculate years lived with disability (YLDs) and disability-adjusted life years (DALYs). Separate DALYs were estimated for suicide and ischemic heart disease attributable to depressive disorders. Depressive disorders were the second leading cause of YLDs in 2010. MDD accounted for 8.2% (5.9%-10.8%) of global YLDs and dysthymia for 1.4% (0.9%-2.0%). Depressive disorders were a leading cause of DALYs even though no mortality was attributed to them as the underlying cause. MDD accounted for 2.5% (1.9%-3.2%) of global DALYs and dysthymia for 0.5% (0.3%-0.6%). There was more regional variation in burden for MDD than for dysthymia, with higher estimates in females and adults of working age. Whilst burden increased by 37.5% between 1990 and 2010, this was due to population growth and ageing. MDD explained 16 million suicide DALYs and almost 4 million ischemic heart disease DALYs. This attributable burden would increase the overall burden of depressive disorders from 3.0% (2.2%-3.8%) to 3.8% (3.0%-4.7%) of global DALYs. Conclusions: GBD 2010 identified depressive disorders as a leading cause of burden. MDD was also a contributor to the burden allocated to suicide and ischemic heart disease. These findings emphasize the importance of including depressive disorders as a public-health priority and of implementing cost-effective interventions to reduce their burden.
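For readers unfamiliar with the GBD metrics, the standard relations behind these figures are stated below for context (they are not spelled out in the abstract itself):

```latex
% Standard GBD 2010 burden relations (context only, not reproduced from the abstract)
\begin{align*}
  \mathrm{YLD} &= \text{prevalent cases} \times \text{disability weight} \\
  \mathrm{DALY} &= \mathrm{YLD} + \mathrm{YLL}
\end{align*}
```

Because no deaths are coded to depressive disorders as the underlying cause, their YLL term is zero and their DALYs are driven entirely by YLDs, which is why they rank so highly despite contributing no directly attributed mortality.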
Abstract:
Objective: To examine the clinical utility of the Cornell Scale for Depression in Dementia (CSDD) in nursing homes. Setting: 14 nursing homes in Sydney and Brisbane, Australia. Participants: 92 residents with a mean age of 85 years. Measurements: Consenting residents were assessed by care staff for depression using the CSDD as part of their routine assessment. Specialist clinicians conducted assessments of depression using the Semi-structured Clinical Diagnostic Interview for DSM-IV-TR Axis I Disorders for residents without dementia, or the Provisional Diagnostic Criteria for Depression in Alzheimer Disease for residents with dementia, to establish expert clinical diagnoses of depression. The diagnostic performance of the staff-completed CSDD was analyzed against expert diagnosis using receiver operating characteristic (ROC) curves. Results: The CSDD showed low diagnostic accuracy, with areas under the ROC curve of 0.69, 0.68 and 0.70 for the total sample, residents with dementia and residents without dementia, respectively. At the standard CSDD cutoff score, the sensitivity and specificity were 71% and 59% for the total sample, 69% and 57% for residents with dementia, and 75% and 61% for residents without dementia. The Youden index (for optimizing cut-points) suggested different depression cutoff scores for residents with and without dementia. Conclusion: When administered by nursing home staff, the clinical utility of the CSDD in identifying depression is highly questionable. The complexity of the scale, the time required for collecting relevant information, and staff skills and knowledge in assessing depression in older people must be considered when using the CSDD in nursing homes.
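As a concrete illustration of how the Youden index selects a screening cutoff (a minimal sketch with hypothetical scores, not the study's data):

```python
# Minimal sketch of choosing a cutoff with the Youden index
# J = sensitivity + specificity - 1, given scale totals and an expert
# diagnosis as the reference standard. The data below are hypothetical.

def youden_cutoff(scores, diagnoses):
    """Return (best_cutoff, best_J); a case screens positive when score >= cutoff."""
    best = (None, -1.0)
    for cutoff in sorted(set(scores)):
        tp = sum(s >= cutoff and d for s, d in zip(scores, diagnoses))
        fn = sum(s < cutoff and d for s, d in zip(scores, diagnoses))
        tn = sum(s < cutoff and not d for s, d in zip(scores, diagnoses))
        fp = sum(s >= cutoff and not d for s, d in zip(scores, diagnoses))
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1
        if j > best[1]:
            best = (cutoff, j)
    return best

# Hypothetical data: scale totals and expert diagnosis (True = depressed)
scores = [2, 4, 5, 7, 8, 9, 11, 13, 14, 16]
diagnoses = [False, False, False, False, True, False, True, True, True, True]
print(youden_cutoff(scores, diagnoses))  # -> (8, 0.8)
```

Running the same procedure separately for residents with and without dementia is what yields group-specific optimal cutoffs, as reported in the abstract.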
Abstract:
Background: The existence of an ecstasy dependence syndrome is controversial. We examined whether the acute after-effects of ecstasy use (i.e., the “come-down”) falsely lead to the identification of ecstasy withdrawal and the subsequent diagnosis of ecstasy dependence. Methods: The Structured Clinical Interview for DSM-IV-TR Disorders: Research Version (SCID-RV) was administered to 214 Australian ecstasy users. Ecstasy withdrawal was operationalized in three contrasting ways: (i) as per DSM-IV criteria; (ii) as the expected after-effects of ecstasy (a regular come-down); or (iii) as a substantially greater or longer come-down than on first use (an intense come-down). These definitions were validated against frequency of ecstasy use, readiness to change and ability to resist the urge to use ecstasy. Confirmatory factor analyses were used to examine how each definition aligned with the overall dependence syndrome. Results: Come-down symptoms increased the prevalence of withdrawal from 1% (DSM-IV criterion) to 11% (intense come-downs) and 75% (regular come-downs). Past-year ecstasy dependence remained at 31% when including the DSM-IV withdrawal criterion and was 32% with intense come-downs, but increased to 45% with regular come-downs. Intense come-downs were associated with lower ability to resist ecstasy use and loaded positively on the dependence syndrome. Regular come-downs did not load positively on the ecstasy dependence syndrome and were not related to other indices of dependence. Conclusion: The acute after-effects of ecstasy should be excluded when assessing ecstasy withdrawal, as they can lead to a false diagnosis of ecstasy dependence. Worsening of the ecstasy come-down may be a marker for dependence.
Abstract:
In the general population it is evident that parent feeding practices can directly shape a child's lifelong dietary intake. Young children undergoing childhood cancer treatment may experience feeding difficulties and limited food intake due to the inherent side effects of their anti-cancer treatment. What is not clear is how these treatment side effects influence the parent–child feeding relationship during anti-cancer treatment. This retrospective qualitative study collected telephone-based interview data from 38 parents of childhood cancer patients who had recently completed cancer treatment (child's mean age: 6.98 years). Parents described a range of treatment side effects that impacted on their child's ability to eat, often resulting in weight loss. Sixty-one percent of parents (n = 23) reported high levels of stress regarding their child's eating and weight loss during treatment. Parents reported stress, feelings of helplessness, and conflict and/or tension between parent and child during feeding/eating interactions. Parents described using both positive and negative feeding practices, such as pressuring their child to eat, threatening the insertion of a nasogastric feeding tube, encouraging the child to eat, and providing home-cooked meals in hospital. Results indicated that parent stress may lead to the use of coping strategies, such as positive or negative feeding practices, to entice their child to eat during cancer treatment. Future research is recommended to determine the implications of parent feeding practices on the long-term diet quality and food preferences of childhood cancer survivors.
Co-optimisation of indoor environmental quality and energy consumption within urban office buildings
Abstract:
This study aimed to develop a multi-component model that can be used to maximise indoor environmental quality inside mechanically ventilated office buildings while minimising energy usage. The integrated model, which was developed and validated from fieldwork data, was employed to assess the potential improvement of indoor air quality and energy saving under different ventilation conditions in typical air-conditioned office buildings in the subtropical city of Brisbane, Australia. When the ventilation system was operated under the predicted optimal conditions of indoor environmental quality and energy conservation, and with outdoor air filtration, average indoor particle number (PN) concentration decreased by as much as 77%, while indoor CO2 concentration and energy consumption were not significantly different from those under normal summer-time operating conditions. Benefits of operating the system with this algorithm were most pronounced during Brisbane's mild winter. In terms of indoor air quality, average indoor PN and CO2 concentrations decreased by 48% and 24%, respectively, while potential energy savings due to free cooling were as high as 108% of the energy used under normal winter-time operating conditions. The application of such a model to the operation of ventilation systems can help to significantly improve indoor air quality and energy conservation in air-conditioned office buildings.
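To illustrate the general idea of co-optimising indoor environmental quality and energy (a sketch only; the cost functions, weights and units below are hypothetical placeholders, not the multi-component model developed in the study):

```python
# Illustrative sketch: trading off indoor particle number (PN), CO2 and energy
# via a weighted objective over the outdoor-air ventilation rate.
# All functions, weights and units are hypothetical assumptions.

def total_cost(vent_rate, w_pn=1.0, w_co2=1.0, w_energy=1.0):
    """Lower is better. vent_rate: outdoor air changes per hour (hypothetical)."""
    pn = 80.0 / (1.0 + vent_rate)           # indoor PN proxy, assuming filtered outdoor air
    co2 = 400 + 600.0 / (1.0 + vent_rate)   # indoor CO2 proxy (ppm)
    energy = 5.0 * vent_rate                # fan/conditioning energy proxy (kWh)
    return w_pn * pn + w_co2 * (co2 - 400) / 10 + w_energy * energy

# Grid search for the ventilation rate minimising the combined cost.
rates = [r / 10 for r in range(1, 101)]     # 0.1 to 10.0 ACH
best = min(rates, key=total_cost)
print(f"optimal ventilation rate ~ {best} ACH, cost = {total_cost(best):.1f}")
```

In practice, the weights encode how strongly the operator values air quality relative to energy, and the proxy functions would be replaced by the field-validated relationships between ventilation, pollutant concentrations and HVAC energy use.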