990 results for disclosure versus recognition
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned due to problems such as zapping, saturation and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence to support that theoretical development. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from computer-assisted telephone interviews (CATI) performed ad hoc on a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that non-conventional advertising formats are more effective at a cognitive level: all the analyzed formats generate higher levels of both unaided and aided recall than the spot.
Abstract:
Problem addressed: Wrist-worn accelerometers are associated with greater compliance. However, validated algorithms for predicting activity type from wrist-worn accelerometer data are lacking. This study compared the activity recognition rates of an activity classifier trained on acceleration signals collected at the wrist and at the hip. Methodology: 52 children and adolescents (mean age 13.7 +/- 3.1 years) completed 12 activity trials that were categorized into 7 activity classes: lying down, sitting, standing, walking, running, basketball, and dancing. During each trial, participants wore an ActiGraph GT3X+ tri-axial accelerometer on the right hip and the non-dominant wrist. Features were extracted from 10-s windows and input to a regularized logistic regression model in R (glmnet, L1 penalty). Results: Classification accuracy for the hip and wrist was 91.0% +/- 3.1% and 88.4% +/- 3.0%, respectively. The hip model exhibited excellent classification accuracy for sitting (91.3%), standing (95.8%), walking (95.8%), and running (96.8%); acceptable classification accuracy for lying down (88.3%) and basketball (81.9%); and modest accuracy for dance (64.1%). The wrist model exhibited excellent classification accuracy for sitting (93.0%), standing (91.7%), and walking (95.8%); acceptable classification accuracy for basketball (86.0%); and modest accuracy for running (78.8%), lying down (74.6%) and dance (69.4%). Potential impact: Both the hip and wrist algorithms achieved acceptable classification accuracy, allowing researchers to use either placement for activity recognition.
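The pipeline this abstract describes (features from fixed-length windows fed to an L1-regularized multinomial logistic regression) can be sketched as follows. The study used R's glmnet; this is a hedged Python analogue on synthetic stand-in data, so the feature counts, class shifts, and hyperparameters here are illustrative assumptions, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for features extracted from 10-s accelerometer windows
# (e.g. per-axis means, standard deviations, percentiles); 7 activity classes
# mirror the study's lying/sitting/standing/walking/running/basketball/dance.
n_windows, n_features, n_classes = 700, 12, 7
X = rng.normal(size=(n_windows, n_features))
y = rng.integers(0, n_classes, size=n_windows)
X += y[:, None] * 0.8  # shift class means so the toy problem is learnable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# L1-penalized multinomial logistic regression, analogous to glmnet with an
# L1 penalty; "saga" is one of the scikit-learn solvers that supports L1.
clf = LogisticRegression(penalty="l1", solver="saga", C=1.0, max_iter=5000)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out window-level accuracy: {acc:.2f}")
```

In practice the per-class accuracies quoted above would come from a confusion matrix over held-out windows rather than a single overall score.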
Abstract:
Older adults often demonstrate higher levels of false recognition than do younger adults. However, in experiments using novel shapes without preexisting semantic representations, this age-related elevation in false recognition was found to be greatly attenuated. Two experiments tested a semantic categorization account of these findings, examining whether older adults show especially heightened false recognition if the stimuli have preexisting semantic representations, such that semantic category information attenuates or truncates the encoding or retrieval of item-specific perceptual information. In Experiment 1, ambiguous shapes were presented with or without disambiguating semantic labels. Older adults showed higher false recognition when labels were present but not when labels were never presented. In Experiment 2, older adults showed higher false recognition for concrete but not abstract objects. The semantic categorization account was supported.
Abstract:
Background: Despite the recognition of obesity in young people as a key health issue, there is limited evidence to inform health professionals regarding the most appropriate treatment options. The Eat Smart study aims to contribute to the knowledge base of effective dietary strategies for the clinical management of the obese adolescent and to examine the cardiometabolic effects of a reduced carbohydrate diet versus a low fat diet. Methods and design: Eat Smart is a randomised controlled trial and aims to recruit 100 adolescents over a 2½-year period. Families will be invited to participate following referral by their health professional, who has recommended weight management. Participants will be overweight as defined by a body mass index (BMI) greater than the 90th percentile, using CDC 2000 growth charts. An accredited 6-week psychological life skills program, ‘FRIENDS for Life’, which is designed to provide behaviour change and coping skills, will be undertaken prior to volunteers being randomised to a group. The intervention arms include a structured reduced carbohydrate or a structured low fat dietary program based on an individualised energy prescription. The intervention will involve a series of dietetic appointments over 24 weeks. The control group will commence the dietary program of their choice after a 12-week period. Outcome measures will be assessed at baseline, week 12 and week 24. The primary outcome measure will be change in BMI z-score. A range of secondary outcome measures, including body composition, lipid fractions, inflammatory markers, and social and psychological measures, will also be assessed. Discussion: The chronic and difficult nature of treating the obese adolescent is increasingly recognised by clinicians and has highlighted the need for research aimed at providing effective intervention strategies, particularly for use in the tertiary setting.
A structured reduced carbohydrate approach may provide a dietary pattern that some families will find more sustainable and effective than the conventional low fat dietary approach currently advocated. This study aims to investigate the acceptability and effectiveness of a structured reduced dietary carbohydrate intervention and will compare the outcomes of this approach with a structured low fat eating plan. Trial Registration: The protocol for this study is registered with the International Clinical Trials Registry (ISRCTN49438757).
Abstract:
Few studies have investigated iatrogenic outcomes from the viewpoint of patient experience. To address this gap, the broad aim of this research is to explore the lived experience of patient harm. Patient harm is defined as major harm to the patient, either psychosocial or physical in nature, resulting from any aspect of health care. Utilising the method of Consensual Qualitative Research (CQR), in-depth interviews are conducted with twenty-four volunteer research participants who self-report having been severely harmed by an invasive medical procedure. A standardised measure of emotional distress, the Impact of Event Scale (IES), is additionally employed for purposes of triangulation. Thematic analysis of the transcript data indicates numerous findings, including: (i) difficulties regarding patients' prior understanding of the risks involved with their medical procedure; (ii) the problematic response of the health system post-procedure; (iii) multiple adverse effects upon life functioning; (iv) limited recourse options for patients; and (v) the approach desired in terms of how patient harm should be systemically handled. In addition, IES results indicate a clinically significant level of distress in the sample as a whole. To discuss the findings, a cross-disciplinary approach is adopted that draws upon sociology, medicine, medical anthropology, psychology, philosophy, history, ethics, law, and political theory. Furthermore, an overall explanatory framework is proposed in terms of the master themes of power and trauma. In terms of the theme of power, a postmodernist analysis explores the politics of patient harm, particularly the dynamics surrounding the politics of knowledge (e.g., notions of subjective versus objective knowledge, informed consent, and open disclosure). This analysis suggests that patient care is not the prime function of the health system, which appears more focussed upon serving the interests of those in the upper levels of its hierarchy.
In terms of the master theme of trauma, current understandings of posttraumatic stress disorder (PTSD) are critiqued, and based on data from this research as well as the international literature, a new model of trauma is proposed. This model is based upon the principle of homeostasis observed in biology, whereby within every cell or organism a state of equilibrium is sought and maintained. The proposed model identifies several bio-psychosocial markers of trauma across its three main phases. These trauma markers include: (i) a profound sense of loss; (ii) a lack of perceived control; (iii) passive trauma processing responses; (iv) an identity crisis; (v) a quest to fully understand the trauma event; (vi) a need for social validation of the traumatic experience; and (vii) posttraumatic adaptation with the possibility of positive change. To further explore the master themes of power and trauma, a natural group interview is carried out at a meeting of a patient support group for arachnoiditis. Observations at this meeting and members' stories in general support the homeostatic model of trauma, particularly the quest to find answers in the face of distressing experience, as well as the need for social recognition of that experience. In addition, the sociopolitical response to arachnoiditis highlights how public domains of knowledge are largely constructed and controlled by vested interests. Implications of the data overall are discussed in terms of a cultural revolution being needed in health care to position core values around a prime focus upon patients as human beings.
Abstract:
Each financial year, concessions, benefits and incentives are delivered to taxpayers via the tax system. These concessions, benefits and incentives, referred to as tax expenditure, differ from direct expenditure because of their recurring fiscal impact without regular scrutiny through the federal budget process. There are approximately 270 different tax expenditures existing within the current tax regime, with total measured tax expenditures in the 2005-06 financial year estimated to be around $42.1 billion, increasing to $52.7 billion by 2009-10. Each year, new tax expenditures are introduced, while existing tax expenditures are modified and deleted. In recognition of some of the problems associated with tax expenditure, a Tax Expenditure Statement, as required by the Charter of Budget Honesty Act 1998, is produced annually by the Australian Federal Treasury. The Statement details the various expenditures and measures in the form of concessions, benefits and incentives provided to taxpayers by the Australian Government, and calculates the tax expenditure in terms of revenue forgone. A similar approach to reporting tax expenditure, with such a report being a legal requirement, is followed by most OECD countries. The current Tax Expenditure Statement lists 270 tax expenditures and, where it is able to, reports on the estimated pecuniary value of those expenditures. Apart from the annual Tax Expenditure Statement, there is very little other scrutiny of Australia's Federal tax expenditure program. While there have been various academic analyses of tax expenditure in Australia, when compared to the North American literature, it is suggested that the Australian literature is still in its infancy.
In fact, one academic author who has contributed to tax expenditure analysis recently noted that there is ‘remarkably little secondary literature which deals at any length with tax expenditures in the Australian context.’ Given this perceived gap in the secondary literature, this paper examines the fundamental concept of tax expenditure and considers the role it plays in the current tax regime as a whole, along with the effects of the introduction of new tax expenditures. In doing so, tax expenditure is contrasted with direct expenditure. Analysis of tax expenditure versus direct expenditure already constitutes a sophisticated and comprehensive body of work stemming from the US over the last three decades. As such, the title of this paper is rather misleading. However, given the lack of analysis in Australia, it is appropriate that this paper undertakes a consideration of tax expenditure versus direct expenditure in an Australian context. Given this proposition, rather than purport to undertake a comprehensive analysis of tax expenditure, which has already been done, this paper discusses the substantive considerations of any such analysis, to enable further investigation into the tax expenditure regime both as a whole and into individual tax expenditure initiatives. While none of the propositions in this paper are new in a ‘tax expenditure analysis’ sense, this debate is a relatively new contribution to the Australian literature on tax policy. Before the issues relating to tax expenditure can be determined, it is necessary to consider what is meant by ‘tax expenditure’. As such, part two of this paper defines ‘tax expenditure’. Part three determines the framework in which tax expenditure can be analysed. It is suggested that an analysis of tax expenditure must be evaluated within the framework of the design criteria of an income tax system, with the key features of equity, efficiency, and simplicity.
Tax expenditure analysis can then be applied to deviations from the ideal tax base. Once it is established what is meant by tax expenditure and the framework for evaluation is determined, it is possible to establish the substantive issues to be evaluated. This paper suggests that there are four broad areas worthy of investigation: economic efficiency, administrative efficiency, whether tax expenditure initiatives achieve their policy intent, and the impact on stakeholders. Given these areas of investigation, part four of this paper considers the issues relating to the economic efficiency of the tax expenditure regime, in particular the effect on resource allocation, incentives for taxpayer behaviour, and distortions created by tax expenditures. Part five examines the notion of administrative efficiency in light of the fact that most tax expenditures could simply be delivered as direct expenditures. Part six explores the notion of policy intent and considers the two questions that need to be asked: whether any tax expenditure initiative reaches its target group, and whether the financial incentives are appropriate. Part seven examines the impact on stakeholders. Finally, part eight considers the future of tax expenditure analysis in Australia.
Abstract:
Facial expression is an important channel of human social communication. Facial expression recognition (FER) aims to perceive and understand the emotional states of humans based on information in the face. Building robust, high-performance FER systems that can work on real-world video is still a challenging task, due to various unpredictable facial variations and complicated exterior environmental conditions, as well as the difficulty of choosing a suitable type of feature descriptor for extracting discriminative facial information. Facial variations caused by factors such as pose, age, gender, race and occlusion can exert a profound influence on robustness, while a suitable feature descriptor largely determines performance. Most attention in FER has been paid to addressing variations in pose and illumination. No approach has been reported for handling face localization errors, and relatively few for overcoming facial occlusions, although the significant impact of these two variations on performance has been demonstrated and highlighted in many previous studies. Many texture and geometric features have previously been proposed for FER. However, few comparison studies have been conducted to explore the performance differences between different features and to examine the performance improvement arising from fusion of texture and geometry, especially on data with spontaneous emotions. The majority of existing approaches are evaluated on databases with posed or induced facial expressions collected in laboratory environments, whereas little attention has been paid to recognizing naturalistic facial expressions in real-world data. This thesis investigates techniques for building robust, high-performance FER systems based on a number of established feature sets. It comprises contributions towards three main objectives: (1) Robustness to face localization errors and facial occlusions.
An approach is proposed to handle face localization errors and facial occlusions using Gabor-based templates. Template extraction algorithms are designed to collect a pool of local template features, and template matching is then performed to convert these templates into distances, which are robust to localization errors and occlusions. (2) Improvement of performance through feature comparison, selection and fusion. A comparative framework is presented to compare the performance of different features and different feature selection algorithms, and to examine the performance improvement arising from fusion of texture and geometry. The framework is evaluated for both discrete and dimensional expression recognition on spontaneous data. (3) Evaluation of performance in the context of real-world applications. A system is selected and applied to discriminating posed versus spontaneous expressions and recognizing naturalistic facial expressions. A database is collected from real-world recordings and is used to explore feature differences between standard database images and real-world images, as well as between real-world images and real-world video frames. The performance evaluations are based on the JAFFE, CK, Feedtum, NVIE, Semaine and self-collected QUT databases. The results demonstrate high robustness of the proposed approach to the simulated localization errors and occlusions. Texture and geometry make different contributions to the performance of discrete and dimensional expression recognition, as well as to posed versus spontaneous emotion discrimination. These investigations provide useful insights into enhancing the robustness and performance of FER systems and putting them into real-world applications.
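The template-to-distance idea in objective (1) can be illustrated schematically: each stored local template is matched against every same-sized region of a face image, and the minimum matching distance becomes one feature, which stays stable when the face is slightly mislocalized. This is only a toy sketch under assumptions: the thesis uses Gabor-filtered templates, whereas plain pixel patches and Euclidean matching stand in here.

```python
import numpy as np

rng = np.random.default_rng(1)

def template_distance(image, template):
    """Minimum Euclidean distance between `template` and any equally sized
    patch of `image` (exhaustive sliding-window template matching)."""
    h, w = template.shape
    H, W = image.shape
    best = np.inf
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            patch = image[r:r + h, c:c + w]
            best = min(best, float(np.linalg.norm(patch - template)))
    return best

# Hypothetical data: a 16x16 "face" image and a pool of five 4x4 templates.
image = rng.normal(size=(16, 16))
templates = [rng.normal(size=(4, 4)) for _ in range(5)]
# Embed template 0 in the image, so its minimum matching distance is ~0
# regardless of where in the image it happens to sit.
image[3:7, 3:7] = templates[0]

features = np.array([template_distance(image, t) for t in templates])
print(features.round(2))  # features[0] is ~0 because template 0 occurs in image
```

Because the minimum is taken over all positions, a small localization error only shifts where the best match is found, not the distance value itself, which is the robustness property the thesis exploits.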
Abstract:
Our contemporary public sphere has seen the 'emergence of new political rituals, which are concerned with the stains of the past, with self disclosure, and with ways of remembering once taboo and traumatic events' (Misztal, 2005). A recent case of this phenomenon occurred in Australia in 2009 with the apology to the 'Forgotten Australians': a group who suffered abuse and neglect after being removed from their parents – either in Australia or in the UK – and placed in Church- and State-run institutions in Australia between 1930 and 1970. This campaign for recognition by a profoundly marginalized group coincides with the decade in which the opportunities of Web 2.0 were seen to be diffusing throughout different social groups, and were considered a tool for social inclusion. This paper examines the case of the Forgotten Australians as an opportunity to investigate the role of the internet in cultural trauma and public apology. As such, it adds to recent scholarship on the role of digital web-based technologies in commemoration and memorials (Arthur, 2009; Haskins, 2007; Cohen and Willis, 2004), and on digital storytelling in the context of trauma (Klaebe, 2011), by locating their role in a broader and emerging domain of social responsibility and political action (Alexander, 2004).
Abstract:
Background: Ensuring efficient and effective delivery of health care to an ageing population has been a major driver for a review of the health workforce in Australia. As part of this process a National Registration and Accreditation Scheme (NRAS) has evolved, with one goal being to improve workforce flexibility within a nationally consistent model of governance. In addition to increased flexibility, there have been discussions about maintaining standards and the role of specialisation. This study aims to explore the association between practitioners' self-perceptions about their special interest in musculoskeletal, diabetes-related and podopaediatric foot care and the actual podiatry services they deliver in Australia. Methods: A cross-sectional online survey was administered on behalf of the Australasian Podiatry Council and its state-based member associations. Self-reported data were collected over a 3-week interval and captured information about the practitioners by gender, years of clinical experience, area of work by state, work setting, and location. Participants who identified with an area of special interest or specialty were asked further questions regarding support for the area of special interest through education, and the activities performed in treating patients in the week prior to survey completion. Queensland University of Technology Human Research Ethics approval was sought, and the study was confirmed exempt from review. Results: 218 podiatrists participated in the survey. Participants were predominantly female and worked in private practices. The largest area of personal interest reported by the podiatrists was the field of musculoskeletal podiatry (n = 65), followed closely by diabetes foot care (n = 61); a third area identified was the management of podopaediatric conditions (n = 26).
Conclusions: Health workforce reform in Australia is in part being managed by the federal government with a goal to meet the health care needs of Australians into the future. The recognition of a specialty registration of podiatric surgery and endorsement for scheduled medicines was established with this workforce reform in mind. The addition of new subspecialties may be indicated on the basis of professional development, to maintain high standards and meet community expectations.
Abstract:
We have analyzed the set of inter and intra base pair parameters for each dinucleotide step in single crystal structures of dodecamers, solved at high and medium resolution and all crystallized in the P2(1)2(1)2(1) space group. The objective was to identify whether all the structures, which have either the Drew-Dickerson (DD) sequence d[CGCGAATTCGCG] with some base modification or a related sequence (non-DD), would display the same sequence-dependent structural variability about the palindromic sequence, despite the molecule being bent at one end because of a similar crystal lattice packing effect. Most of the local doublet parameters for the base pair steps at the G2-C3 and G10-C11 positions, symmetrically situated about the lateral twofold, were significantly correlated between themselves. In non-DD sequences, significant correlations between these positional parameters were absent. The different range of local step parameter values at each sequence position contributed to the gross feature of smooth helix axis bending in all structures. The base pair parameters at some of the positions, for the medium resolution DD sequence set, were quite unlike the high-resolution set and encompassed a wider range of values. Twist and slide are the two main parameters that show a wider conformational range for the middle region of non-DD sequence structures in comparison to DD sequence structures. On the contrary, the minor and major groove features bear good resemblance between the DD and non-DD sequence crystal structure datasets. The sugar-phosphate backbone torsion angles are similar in all structures, in sharp contrast to the base pair parameter variation for high and low resolution DD and non-DD sequence structures, with an unusual (epsilon = g-, zeta = t) B-II conformation at the 10th position of the dodecamer sequence.
Thus, examining DD and non-DD sequence structures packed in the same crystal lattice arrangement, we infer that the inter and intra base pair parameters are as symmetrically equivalent in their values as the symmetry-related steps for the palindromic DD sequence about the lateral two-fold axis. This feature leads us to agree with the conclusion that DNA conformation is not substantially affected by end-to-end or lateral inter-molecular interactions due to crystal lattice packing effects. Non-DD sequence structures acquire step parameter values that reflect the altered sequence at each dodecamer sequence position in the orthorhombic lattice, while showing the same gross features as the DD sequence structures.
Abstract:
Background: Automated closed loop systems may improve adaptation of the mechanical support to a patient's ventilatory needs and
facilitate systematic and early recognition of their ability to breathe spontaneously and the potential for discontinuation of
ventilation.
Objectives: To compare the duration of weaning from mechanical ventilation for critically ill ventilated adults and children when managed
with automated closed loop systems versus non-automated strategies. Secondary objectives were to determine differences
in duration of ventilation, intensive care unit (ICU) and hospital length of stay (LOS), mortality, and adverse events.
Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2011, Issue 2); MEDLINE (OvidSP) (1948 to August 2011); EMBASE (OvidSP) (1980 to August 2011); CINAHL (EBSCOhost) (1982 to August 2011); and the Latin American and Caribbean Health Sciences Literature (LILACS). In addition we received and reviewed auto-alerts for our search strategy in MEDLINE, EMBASE, and CINAHL up to August 2012. Relevant published reviews were sought using the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessment Database (HTA Database). We also searched the Web of Science Proceedings; conference proceedings; trial registration websites; and reference lists of relevant articles.
Selection criteria: We included randomized controlled trials comparing automated closed loop ventilator applications to non-automated weaning
strategies including non-protocolized usual care and protocolized weaning in patients over four weeks of age receiving invasive mechanical ventilation in an intensive care unit (ICU).
Data collection and analysis: Two authors independently extracted study data and assessed risk of bias. We combined data into forest plots using random-effects modelling. Subgroup and sensitivity analyses were conducted according to a priori criteria.
Main results: Pooled data from 15 eligible trials (14 adult, one paediatric) totalling 1173 participants (1143 adults, 30 children) indicated that automated closed loop systems reduced the geometric mean duration of weaning by 32% (95% CI 19% to 46%, P = 0.002); however, heterogeneity was substantial (I² = 89%, P < 0.00001). Reduced weaning duration was found with mixed or medical ICU populations (43%, 95% CI 8% to 65%, P = 0.02) and Smartcare/PS™ (31%, 95% CI 7% to 49%, P = 0.02), but not in surgical populations or with other systems. Automated closed loop systems reduced the duration of ventilation (17%, 95% CI 8% to 26%) and ICU length of stay (LOS) (11%, 95% CI 0% to 21%). There was no difference in mortality rates or hospital LOS. Overall the quality of evidence was high, with the majority of trials rated as low risk of bias.
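As a reading aid, the headline 32% figure is a reduction in the *geometric* mean, i.e. a ratio of geometric means back-transformed from the log scale on which the durations were pooled. A minimal arithmetic check (only the reported 32% is taken from the review; the back-and-forth transformation is general):

```python
import math

# A k% reduction in geometric mean duration corresponds to a ratio of
# geometric means of (1 - k/100), i.e. a pooled effect of log(1 - k/100)
# on the log scale used for meta-analysis of skewed durations.
reduction = 0.32                 # reported: 32% (95% CI 19% to 46%)
ratio = 1.0 - reduction          # 0.68: weaning takes 68% as long on average
log_ratio = math.log(ratio)      # the pooled estimate on the log scale

# Back-transforming the log-scale estimate recovers the percent reduction.
recovered = (1.0 - math.exp(log_ratio)) * 100
print(f"ratio of geometric means: {ratio:.2f}, reduction: {recovered:.0f}%")
```

The same transformation applies to the 17% (ventilation) and 11% (ICU LOS) figures, and to the CI limits, since each is a back-transformed log-scale quantity.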
Authors' conclusions: Automated closed loop systems may result in reduced duration of weaning, ventilation, and ICU stay. Reductions are more likely to occur in mixed or medical ICU populations. Due to the lack of, or limited, evidence on automated systems other than Smartcare/PS™ and Adaptive Support Ventilation, no conclusions can be drawn regarding their influence on these outcomes. Due to the substantial heterogeneity in trials, there is a need for an adequately powered, high quality, multi-centre randomized controlled trial in adults that excludes 'simple to wean' patients. There is a pressing need for further technological development and research in the paediatric population.
Abstract:
Purpose
To compare the efficacy and safety of ranibizumab and bevacizumab intravitreal injections to treat neovascular age-related macular degeneration (nAMD).
Design
Multicenter, noninferiority factorial trial with equal allocation to groups. The noninferiority limit was 3.5 letters. This trial is registered (ISRCTN92166560).
Participants
People ≥50 years of age with untreated nAMD in the study eye who read ≥25 letters on the Early Treatment Diabetic Retinopathy Study chart.
Methods
We randomized participants to 4 groups: ranibizumab or bevacizumab, given either every month (continuous) or as needed (discontinuous), with monthly review.
Main Outcome Measures
The primary outcome is at 2 years; this paper reports a prespecified interim analysis at 1 year. The primary efficacy and safety outcome measures are distance visual acuity and arteriothrombotic events or heart failure. Other outcome measures are health-related quality of life, contrast sensitivity, near visual acuity, reading index, lesion morphology, serum vascular endothelial growth factor (VEGF) levels, and costs.
Results
Between March 27, 2008 and October 15, 2010, we randomized and treated 610 participants. One year after randomization, the comparison between bevacizumab and ranibizumab was inconclusive (bevacizumab minus ranibizumab -1.99 letters, 95% confidence interval [CI], -4.04 to 0.06). Discontinuous treatment was equivalent to continuous treatment (discontinuous minus continuous -0.35 letters; 95% CI, -2.40 to 1.70). Foveal total thickness did not differ by drug, but was 9% less with continuous treatment (geometric mean ratio [GMR], 0.91; 95% CI, 0.86 to 0.97; P = 0.005). Fewer participants receiving bevacizumab had an arteriothrombotic event or heart failure (odds ratio [OR], 0.23; 95% CI, 0.05 to 1.07; P = 0.03). There was no difference between drugs in the proportion experiencing a serious systemic adverse event (OR, 1.35; 95% CI, 0.80 to 2.27; P = 0.25). Serum VEGF was lower with bevacizumab (GMR, 0.47; 95% CI, 0.41 to 0.54; P<0.0001) and higher with discontinuous treatment (GMR, 1.23; 95% CI, 1.07 to 1.42; P = 0.004). Continuous and discontinuous treatment costs were £9656 and £6398 per patient per year for ranibizumab and £1654 and £1509 for bevacizumab; bevacizumab was less costly for both treatment regimens (P<0.0001).
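Why "inconclusive"? The prespecified noninferiority limit was 3.5 letters, so the verdict follows mechanically from where the 95% CI for the difference sits relative to a -3.5-letter margin. A small sketch using the reported interval:

```python
# Noninferiority logic: bevacizumab is noninferior to ranibizumab only if the
# entire 95% CI for (bevacizumab - ranibizumab) lies above the -3.5 letter
# margin; it is clearly worse only if the whole CI lies below the margin.
margin = -3.5
ci_low, ci_high = -4.04, 0.06    # reported 1-year interim CI, in letters

noninferior = ci_low > margin    # False: lower limit -4.04 crosses the margin
inferior = ci_high < margin      # False: the CI does not lie wholly below it
inconclusive = not noninferior and not inferior

verdict = "inconclusive" if inconclusive else ("noninferior" if noninferior else "inferior")
print(verdict)  # prints "inconclusive"
```

By the same rule, the discontinuous-versus-continuous comparison (-2.40 to 1.70) stays entirely inside ±3.5 letters, which is why the paper can call those regimens equivalent.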
Conclusions
The comparison of visual acuity at 1 year between bevacizumab and ranibizumab was inconclusive. Visual acuities with continuous and discontinuous treatment were equivalent. Other outcomes are consistent with the drugs and treatment regimens having similar efficacy and safety.
Financial Disclosure(s)
Proprietary or commercial disclosures may be found after the references.
Abstract:
Purpose: To report the secondary outcomes in the Carotenoids with Coantioxidants in Age-Related Maculopathy trial.
Design: Randomized double-masked placebo-controlled clinical trial (registered as ISRCTN 94557601).
Participants: Participants included 433 adults 55 years of age or older with early age-related macular degeneration (AMD) in 1 eye and late-stage disease in the fellow eye (group 1) or early AMD in both eyes (group 2).
Intervention: An oral preparation containing lutein (L), zeaxanthin (Z), vitamin C, vitamin E, copper, and zinc or placebo. Best-corrected visual acuity (BCVA), contrast sensitivity (CS), Raman spectroscopy, stereoscopic colour fundus photography, and serum sampling were performed every 6 months with a minimum follow-up time of 12 months.
Main Outcome Measures: Secondary outcomes included differences in BCVA (at 24 and 36 months), CS, Raman counts, serum antioxidant levels, and progression along the AMD severity scale (at 12, 24, and 36 months).
Results: The differential between active and placebo groups increased steadily, with average BCVA in the former being approximately 4.8 letters better than the latter for those who had 36 months of follow-up, and this difference was statistically significant (P = 0.04). In the longitudinal analysis, for a 1-log-unit increase in serum L, visual acuity was better by 1.4 letters (95% confidence interval, 0.3-2.5; P = 0.01), and a slower progression along a morphologic severity scale (P = 0.014) was observed.
Conclusions: Functional and morphologic benefits were observed in key secondary outcomes after supplementation with L, Z, and coantioxidants in persons with early AMD.
Financial Disclosure(s): The author(s) have no proprietary or commercial interest in any materials discussed in this article. © 2012 American Academy of Ophthalmology.