Abstract:
Purpose – Informed by the work of Laughlin and Booth, the paper analyses the role of accounting and accountability practices within the 15th century Roman Catholic Church, more specifically within the Diocese of Ferrara (northern Italy), in order to determine the presence of a sacred-secular dichotomy. Pope Eugenius IV had embarked upon a comprehensive reform of the Church to counter the spreading moral corruption within the clergy and the subsequent disaffection with the Church by many believers. The reforms were notable not only for the Pope’s determination to restore the moral authority and power of the Church but also for the essential contributions of ‘profane’ financial and accounting practices to the success of the reforms.
Design/methodology/approach – Original 15th century Latin documents and account books of the Diocese of Ferrara are used to highlight the link between the new sacred values imposed by Pope Eugenius IV’s reforms and accounting and accountability practices.
Findings – The documents reveal that secular accounting and accountability practices were not regarded as necessarily antithetical to religious values, as would be expected by Laughlin and Booth. Instead, they were seen to assume a role which was complementary to the Church’s religious mission. Indeed, they were essential to its sacred mission during a period in which the Pope sought to arrest the moral decay of the clergy and reinstate the Church’s authority.
Research implications/limitations – The paper shows that the sacred-secular dichotomy cannot be considered a priori valid in space and time. There is also scope for examining other Italian dioceses where there was little evidence of Pope Eugenius’ reforms.
Originality/value – The paper presents a critique of the sacred-secular divide paradigm by considering an under-researched period and a non-Anglo-Saxon context.
Abstract:
Mapped topographic features are important for understanding the processes that sculpt the Earth’s surface. This paper presents maps that are the primary product of an exercise that brought together 27 researchers with an interest in landform mapping, in which the efficacy of, and causes of variation in, mapping were tested using novel synthetic DEMs containing drumlins. The variation between interpreters (e.g. mapping philosophy, experience) and across the study region (e.g. woodland prevalence) opens these factors up to assessment. A priori known answers in the synthetics increase the number and strength of conclusions that may be drawn relative to a traditional comparative study. Initial results suggest that overall detection rates are relatively low (34–40%), but that the reliability of mapping is higher (72–86%). The maps form a reference dataset.
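The detection-rate and reliability figures quoted here are, in effect, recall and precision against the known synthetic answers. A minimal sketch of how such scores can be computed, assuming a matching step has already paired mapped features with true drumlins (the function names and matching inputs are illustrative, not the study's actual procedure):

```python
# Illustrative sketch (not the study's procedure): detection rate (recall) and
# reliability (precision) of a set of mapped landforms against the known
# drumlins in a synthetic DEM, given already-matched pairs.

def evaluate_mapping(true_drumlins, mapped_features, matches):
    """matches: set of (true_id, mapped_id) pairs judged to be the same landform."""
    detected = {t for t, _ in matches}          # real drumlins that were found
    correct = {m for _, m in matches}           # mapped features that are real
    detection_rate = len(detected) / len(true_drumlins)
    reliability = len(correct) / len(mapped_features)
    return detection_rate, reliability

rate, rel = evaluate_mapping(
    true_drumlins={1, 2, 3, 4, 5},
    mapped_features={"a", "b", "c"},
    matches={(1, "a"), (3, "b")},
)
# 2 of 5 drumlins detected (0.4); 2 of 3 mapped features are real (~0.67)
```

A low detection rate with high reliability, as reported above, corresponds to interpreters mapping conservatively: few false positives, many missed landforms.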
Abstract:
Dietary pattern (DP) analysis allows examination of the combined effects of nutrients and foods on markers of CVD. Very few studies have examined these relationships during adolescence or young adulthood. Traditional CVD risk biomarkers were analysed in 12-15-year-olds (n 487; Young Hearts (YH)1) and again in the same individuals at 20-25 years of age (n 487; YH3). Based on 7 d diet histories, DP analysis was performed using a posteriori principal component analysis for the YH3 cohort, and the a priori Mediterranean Diet Score (MDS) was calculated for both the YH1 and YH3 cohorts. In the a posteriori DP analysis, YH3 participants adhering most closely to the 'healthy' DP had lower pulse wave velocity (PWV) and homocysteine concentrations; those adhering to the 'sweet tooth' DP had increased LDL concentrations, systolic and diastolic blood pressure, and decreased HDL concentrations; those adhering to the 'drinker/social' DP had lower LDL and homocysteine concentrations, but showed a trend towards higher TAG concentrations; and those adhering to the 'Western' DP had elevated homocysteine and HDL concentrations. In the a priori dietary score analysis, YH3 participants adhering most closely to the Mediterranean diet showed a trend towards lower PWV. The MDS did not track between YH1 and YH3, nor was there a longitudinal relationship between change in the MDS and change in CVD risk biomarkers. In conclusion, cross-sectional analysis revealed that some associations between DPs and CVD risk biomarkers were already evident in this young adult population, namely the association between the healthy DP (and the MDS) and PWV; however, no longitudinal associations were observed over these relatively short time periods.
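As an illustration of how an a priori score such as the MDS is typically constructed, the sketch below awards one point for intake above the cohort median of a beneficial component and one point for intake below the median of a detrimental one. The component lists and scoring rule are simplified assumptions, not the exact definition used in the Young Hearts study:

```python
from statistics import median

# Illustrative a priori diet score (simplified MDS-style scoring): +1 for
# intake above the cohort median of a beneficial component, +1 for intake
# below the median of a detrimental one. Component lists are assumptions.
BENEFICIAL = ["vegetables", "fruit", "legumes", "fish", "cereals"]
DETRIMENTAL = ["meat", "dairy"]

def diet_scores(cohort):
    """cohort: list of dicts mapping component name -> daily intake."""
    medians = {c: median(p[c] for p in cohort) for c in BENEFICIAL + DETRIMENTAL}
    scores = []
    for p in cohort:
        s = sum(p[c] > medians[c] for c in BENEFICIAL)    # beneficial: above median
        s += sum(p[c] < medians[c] for c in DETRIMENTAL)  # detrimental: below median
        scores.append(s)
    return scores
```

Because the score is defined relative to cohort medians, it ranks participants within a cohort; comparing YH1 with YH3, as in the tracking analysis above, requires computing the score separately in each cohort.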
Abstract:
Reducing wafer metrology continues to be a major target of semiconductor manufacturing efficiency initiatives because it is a high-cost, non-value-added operation that impacts cycle time and throughput. However, metrology cannot be eliminated completely, given the important role it plays in process monitoring and advanced process control. To achieve the required manufacturing precision, measurements are typically taken at multiple sites across a wafer. The selection of these sites is usually based on a priori knowledge of wafer failure patterns and spatial variability, with additional sites added over time in response to process issues. As a result, it is often the case that in mature processes significant redundancy can exist in wafer measurement plans. This paper proposes a novel methodology based on Forward Selection Component Analysis (FSCA) for analyzing historical metrology data in order to determine the minimum set of wafer sites needed for process monitoring. The paper also introduces a virtual metrology (VM) based approach for reconstructing the complete wafer profile from the optimal sites identified by FSCA. The proposed methodology is tested and validated on a wafer manufacturing metrology dataset.
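In the spirit of FSCA, site selection can be sketched as a greedy forward search that repeatedly adds the measurement site whose inclusion most increases the variance of the full metrology matrix explained by a least-squares reconstruction from the selected sites. This is an illustrative sketch under that interpretation, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch of greedy forward selection of measurement sites:
# at each step, add the site (column of X) that most increases the variance
# of the full wafer data explained by least-squares reconstruction from the
# selected sites. Variable names are illustrative.

def forward_select_sites(X, k):
    """X: (n_wafers, n_sites) mean-centred metrology matrix; returns k site indices."""
    n_sites = X.shape[1]
    selected = []
    total_var = np.sum(X ** 2)
    for _ in range(k):
        best_site, best_explained = None, -np.inf
        for j in range(n_sites):
            if j in selected:
                continue
            S = X[:, selected + [j]]
            # least-squares reconstruction of all sites from the candidate subset
            coeffs, *_ = np.linalg.lstsq(S, X, rcond=None)
            explained = total_var - np.sum((X - S @ coeffs) ** 2)
            if explained > best_explained:
                best_site, best_explained = j, explained
        selected.append(best_site)
    return selected
```

A VM-style reconstruction of the complete wafer profile then amounts to regressing all sites on the selected subset, exactly as in the inner loop above.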
Abstract:
Thermocouples are one of the most popular devices for temperature measurement due to their robustness, ease of manufacture and installation, and low cost. However, when used in certain harsh environments, for example in combustion systems and engine exhausts, large wire diameters are required, and consequently the measurement bandwidth is reduced. This article discusses a software compensation technique that addresses the loss of high-frequency fluctuations based on measurements from two thermocouples. In particular, a difference equation (DE) approach is proposed and compared with existing methods, both in simulation and on experimental test rig data with constant flow velocity. It is found that the DE algorithm, combined with the use of generalized total least squares for parameter identification, provides better performance in terms of time constant estimation without any a priori assumption on the time constant ratios of the thermocouples.
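The underlying sensor model is a first-order lag, and the compensation idea can be illustrated by simulating a thermocouple response and inverting its discrete difference equation once the time constant is known. The identification step (generalized total least squares on the two thermocouple signals) is not shown; function names and the discretisation are illustrative assumptions:

```python
import math

# Illustrative first-order lag model of a thermocouple: tau*dy/dt + y = T(t).
# Discretised with a zero-order hold this gives the difference equation
# y[k] = a*y[k-1] + (1 - a)*T[k], with a = exp(-dt/tau). Once tau is known,
# the lag can be inverted exactly (in the noise-free case).

def simulate_thermocouple(gas_temp, tau, dt):
    """Simulate the lagged thermocouple reading for a gas temperature sequence."""
    a = math.exp(-dt / tau)
    y, out = gas_temp[0], []
    for T in gas_temp:
        y = a * y + (1 - a) * T
        out.append(y)
    return out

def invert_lag(measured, tau, dt):
    """Recover the gas temperature by inverting the difference equation."""
    a = math.exp(-dt / tau)
    est = [measured[0]]
    for k in range(1, len(measured)):
        est.append((measured[k] - a * measured[k - 1]) / (1 - a))
    return est
```

In practice the inversion amplifies high-frequency measurement noise, which is why the bandwidth of the compensated signal remains limited by sensor noise even when the time constants are well estimated.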
Abstract:
Background: This is an update of a review last published in Issue 5, 2010, of The Cochrane Library. Reducing weaning time is desirable in minimizing potential complications from mechanical ventilation. Standardized weaning protocols are purported to reduce time spent on mechanical ventilation. However, evidence supporting their use in clinical practice is inconsistent. Objectives: The first objective of this review was to compare the total duration of mechanical ventilation of critically ill adults who were weaned using protocols versus usual (non-protocolized) practice. The second objective was to ascertain differences between protocolized and non-protocolized weaning in outcomes measuring weaning duration, harm (adverse events) and resource use (intensive care unit (ICU) and hospital length of stay, cost). The third objective was to explore, using subgroup analyses, variations in outcomes by type of ICU, type of protocol and approach to delivering the protocol (professional-led or computer-driven). Search methods: We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library Issue 1, 2014), MEDLINE (1950 to January 2014), EMBASE (1988 to January 2014), CINAHL (1937 to January 2014), LILACS (1982 to January 2014), ISI Web of Science and ISI Conference Proceedings (1970 to February 2014), and reference lists of articles. We did not apply language restrictions. The original search was performed in January 2010 and updated in January 2014. Selection criteria: We included randomized controlled trials (RCTs) and quasi-RCTs of protocolized weaning versus non-protocolized weaning from mechanical ventilation in critically ill adults. Data collection and analysis: Two authors independently assessed trial quality and extracted data. We performed a priori subgroup and sensitivity analyses. We contacted study authors for additional information. Main results: We included 17 trials (with 2434 patients) in this updated review.
The original review included 11 trials. The total geometric mean duration of mechanical ventilation in the protocolized weaning group was on average reduced by 26% compared with the usual care group (N = 14 trials, 95% confidence interval (CI) 13% to 37%, P = 0.0002). Reductions were most likely to occur in medical, surgical and mixed ICUs, but not in neurosurgical ICUs. Weaning duration was reduced by 70% (N = 8 trials, 95% CI 27% to 88%, P = 0.009), and ICU length of stay by 11% (N = 9 trials, 95% CI 3% to 19%, P = 0.01). There was significant heterogeneity among studies for total duration of mechanical ventilation (I2 = 67%, P < 0.0001) and weaning duration (I2 = 97%, P < 0.00001), which could not be explained by subgroup analyses based on type of unit or type of approach. Authors' conclusions: There is evidence of reduced duration of mechanical ventilation, weaning duration and ICU length of stay with use of standardized weaning protocols. Reductions are most likely to occur in medical, surgical and mixed ICUs, but not in neurosurgical ICUs. However, significant heterogeneity among studies indicates caution in generalizing results. Some study authors suggest that organizational context may influence outcomes; however, these factors were not considered in all included studies and could not be evaluated. Future trials should consider an evaluation of the process of intervention delivery to distinguish between intervention and implementation effects. There is an important need for further development and research in the neurosurgical population.
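A note on how such "% reduction in geometric mean duration" figures arise: durations are log-transformed, the group means are differenced, and the difference is back-transformed to a ratio of geometric means. A minimal sketch with made-up data:

```python
import math
from statistics import mean

# Illustrative sketch with made-up numbers: meta-analyses of skewed durations
# often work on the log scale, so a treatment effect is a difference of mean
# log-durations, back-transformed to a ratio of geometric means.

def geometric_mean_reduction(control, treatment):
    """Percent reduction in the geometric mean of treatment relative to control."""
    log_diff = mean(math.log(x) for x in treatment) - mean(math.log(x) for x in control)
    return (1 - math.exp(log_diff)) * 100
```

A 26% reduction, as reported above, corresponds to a geometric mean ratio of 0.74; the confidence interval is obtained the same way by back-transforming the CI of the log-scale difference.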
Abstract:
Background Automated closed loop systems may improve adaptation of mechanical support for a patient's ventilatory needs and facilitate systematic and early recognition of their ability to breathe spontaneously and the potential for discontinuation of ventilation. This review was originally published in 2013 with an update published in 2014. Objectives The primary objective for this review was to compare the total duration of weaning from mechanical ventilation, defined as the time from study randomization to successful extubation (as defined by study authors), for critically ill ventilated patients managed with an automated weaning system versus no automated weaning system (usual care). Secondary objectives for this review were to determine differences in the duration of ventilation, intensive care unit (ICU) and hospital lengths of stay (LOS), mortality, and adverse events related to early or delayed extubation with the use of automated weaning systems compared to weaning in the absence of an automated weaning system. Search methods We searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2013, Issue 8); MEDLINE (OvidSP) (1948 to September 2013); EMBASE (OvidSP) (1980 to September 2013); CINAHL (EBSCOhost) (1982 to September 2013); and the Latin American and Caribbean Health Sciences Literature (LILACS). Relevant published reviews were sought using the Database of Abstracts of Reviews of Effects (DARE) and the Health Technology Assessment Database (HTA Database). We also searched the Web of Science Proceedings; conference proceedings; trial registration websites; and reference lists of relevant articles. The original search was run in August 2011, with database auto-alerts up to August 2012. 
Selection criteria We included randomized controlled trials comparing automated closed loop ventilator applications to non-automated weaning strategies, including non-protocolized usual care and protocolized weaning, in patients over four weeks of age receiving invasive mechanical ventilation in an ICU. Data collection and analysis Two authors independently extracted study data and assessed risk of bias. We combined data in forest plots using random-effects modelling. Subgroup and sensitivity analyses were conducted according to a priori criteria. Main results We included 21 trials (19 adult, two paediatric) totaling 1676 participants (1628 adults, 48 children) in this updated review. Pooled data from 16 eligible trials reporting weaning duration indicated that automated closed loop systems reduced the geometric mean duration of weaning by 30% (95% confidence interval (CI) 13% to 45%); however, heterogeneity was substantial (I2 = 87%, P < 0.00001). Reduced weaning duration was found with mixed or medical ICU populations (42%, 95% CI 10% to 63%) and Smartcare/PS™ (28%, 95% CI 7% to 49%), but not in surgical populations or with other systems. Automated closed loop systems reduced the duration of ventilation (10%, 95% CI 3% to 16%) and ICU LOS (8%, 95% CI 0% to 15%). There was no strong evidence of an effect on mortality rates, hospital LOS, reintubation rates, self-extubation, or use of non-invasive ventilation following extubation. Prolonged mechanical ventilation > 21 days and tracheostomy were reduced in favour of automated systems (relative risk (RR) 0.51, 95% CI 0.27 to 0.95 and RR 0.67, 95% CI 0.50 to 0.90, respectively). Overall, the quality of the evidence was high, with the majority of trials rated as being at low risk of bias. Authors' conclusions Automated closed loop systems may result in reduced duration of weaning, ventilation and ICU stay. Reductions are more likely to occur in mixed or medical ICU populations.
Due to the lack of, or limited, evidence on automated systems other than Smartcare/PS™ and Adaptive Support Ventilation, no conclusions can be drawn regarding their influence on these outcomes. Given the substantial heterogeneity across trials, there is a need for an adequately powered, high-quality, multi-centre randomized controlled trial in adults that excludes 'simple to wean' patients. There is a pressing need for further technological development and research in the paediatric population.
Abstract:
Background: Pedigree reconstruction using genetic analysis provides a useful means to estimate fundamental population biology parameters relating to population demography, trait heritability and individual fitness when combined with other sources of data. However, there remain limitations to pedigree reconstruction in wild populations, particularly in systems where parent-offspring relationships cannot be directly observed, there is incomplete sampling of individuals, or molecular parentage inference relies on low-quality DNA from archived material. While much can still be inferred from incomplete or sparse pedigrees, it is crucial to evaluate a priori the quality and power of the available genetic information before testing specific biological hypotheses. Here, we used microsatellite markers to reconstruct a multi-generation pedigree of wild Atlantic salmon (Salmo salar L.) using archived scale samples collected with a total trapping system within a river over a 10-year period. Using a simulation-based approach, we determined the optimal microsatellite marker number for accurate parentage assignment, and evaluated the power of the resulting partial pedigree to investigate important evolutionary and quantitative genetic characteristics of salmon in the system.
Results: We show that at least 20 microsatellites (ave. 12 alleles/locus) are required to maximise parentage assignment and to improve the power to estimate reproductive success and heritability in this study system. We also show that 1.5-fold differences can be detected between groups simulated to have differing reproductive success, and that it is possible to detect moderate heritability values for continuous traits (h² ≈ 0.40) with more than 80% power when using 28 moderately to highly polymorphic markers.
Conclusion: The methodologies and work flow described provide a robust approach for evaluating archived samples for pedigree-based research, even where only a proportion of the total population is sampled. The results demonstrate the feasibility of pedigree-based studies to address challenging ecological and evolutionary questions in free-living populations, where genealogies can be traced only using molecular tools, and that significant increases in pedigree assignment power can be achieved by using higher numbers of markers.
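The power of exclusion-based parentage assignment grows quickly with marker number, which is why a marker-count threshold emerges from simulation. A toy illustration (not the study's software) of how the probability of excluding an unrelated candidate parent increases with the number of loci:

```python
import random

# Toy simulation (not the study's software): probability that an unrelated
# candidate parent is excluded, i.e. shares no allele with the offspring at
# one or more loci. More loci -> higher exclusion power. Allele frequencies
# are assumed uniform for simplicity.

def exclusion_probability(n_loci, n_alleles, trials=2000, rng=None):
    rng = rng or random.Random(1)               # fixed seed for repeatability
    excluded = 0
    for _ in range(trials):
        for _ in range(n_loci):
            offspring = {rng.randrange(n_alleles), rng.randrange(n_alleles)}
            candidate = {rng.randrange(n_alleles), rng.randrange(n_alleles)}
            if not (offspring & candidate):     # no shared allele: excluded
                excluded += 1
                break
    return excluded / trials
```

With uniform allele frequencies, even a handful of moderately polymorphic loci excludes most unrelated candidates; the study's simulations play the same game with realistic allele frequencies, genotyping error and incomplete sampling.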
Abstract:
Policymakers have largely replaced Single Bounded Discrete Choice (SBDC) valuation with the more statistically efficient repetitive methods: Double Bounded Discrete Choice (DBDC) and Discrete Choice Experiments (DCE). Repetitive valuation permits classification into rational preferences: (i) a priori well-formed; (ii) consistent, non-arbitrary values “discovered” through repetition and experience (Plott, 1996; List, 2003); and irrational preferences: (iii) consistent but arbitrary values “shaped” by the preceding bid level (Tufano, 2010; Ariely et al., 2003); and (iv) inconsistent and arbitrary values. Policy valuations should demonstrate behaviorally rational preferences. We outline novel methods for testing this in DBDC, applied to renewable energy premiums in Chile.
Abstract:
Chromatin immunoprecipitation (ChIP) is an invaluable tool in the study of transcriptional regulation. ChIP methods require both a priori knowledge of the transcriptional regulators that are important for a given biological system and high-quality specific antibodies against these targets. The androgen receptor (AR) is known to play essential roles in male sexual development, in prostate cancer and in the function of many other AR-expressing cell types (e.g. neurons and myocytes). As a ligand-activated transcription factor, the AR also represents an endogenous, inducible system in which to study transcriptional biology. Therefore, ChIP studies of the AR can make use of treatment contrast experiments to define its transcriptional targets. To date, several studies have mapped AR binding sites using ChIP in combination with genome tiling microarrays (ChIP-chip) or direct sequencing (ChIP-seq), mainly in prostate cancer cell lines and with varying degrees of genomic coverage. These studies have provided new insights into the DNA sequences to which the AR can bind, identified AR-cooperating transcription factors, mapped thousands of potential AR-regulated genes and provided insights into the biological processes regulated by the AR. However, further ChIP studies will be required to fully characterise the dynamics of the AR-regulated transcriptional programme, to map the occupancy of the different AR transcriptional complexes that result in different transcriptional outputs, and to delineate the transcriptional networks downstream of the AR.
Abstract:
Increased understanding of knowledge transfer (KT) from universities to the wider regional knowledge ecosystem offers opportunities for increased regional innovation and commercialisation. The aim of this article is to improve understanding of the KT phenomenon in an open innovation context where multiple diverse quadruple helix stakeholders are interacting. An absorptive capacity-based conceptual framework is proposed, using a priori constructs, which portrays the multidimensional process of KT between universities and their constituent stakeholders in pursuit of open innovation and commercialisation. Given the lack of overarching theory in the field, an exploratory, inductive theory-building methodology was adopted, using semi-structured interviews, document analysis and longitudinal observation data over a three-year period. The findings identify five factors, namely human-centric factors, organisational factors, knowledge characteristics, power relationships and network characteristics, which mediate both the ability of stakeholders to engage in KT and the effectiveness of knowledge acquisition, assimilation, transformation and exploitation. This research has implications for policy makers and practitioners by identifying the need to implement interventions to overcome the barriers to KT effectiveness between regional quadruple helix stakeholders within an open innovation ecosystem.
Abstract:
For physicians facing patients with organ-limited metastases from colorectal cancer, tumor shrinkage and sterilization of micrometastatic disease are the main goals, giving the opportunity for secondary surgical resection. At the same time, for the majority of patients who will not achieve a sufficient tumor response, disease control remains the predominant objective. Since FOLFOX and FOLFIRI have similar efficacies, the challenge is to define which could be the most effective targeted agent (anti-EGFR or anti-VEGF) to reach these goals. Therefore, a priori molecular identification of patients who could benefit from anti-EGFR or anti-VEGF monoclonal antibodies (i.e. the currently approved targeted therapies for metastatic colorectal cancer) is of critical importance. In this setting, the KRAS mutation status was the first identified predictive marker of response to anti-EGFR therapy. Since it has been demonstrated that tumors with KRAS mutations do not respond to anti-EGFR therapy, KRAS status must be determined prior to treatment. Thus, for KRAS wild-type patients, the choice that remains is between anti-VEGF and anti-EGFR agents. In this review, we present the most up-to-date data from translational research programs dealing with the identification of biomarkers of response to targeted therapies.
Abstract:
Numerous experimental studies of damage in composite laminates have shown that intralaminar (in-plane) matrix cracks lead to interlaminar delamination (out-of-plane) at ply interfaces. The smearing of in-plane cracks over a volume, a consequence of the use of continuum damage mechanics, does not always effectively capture the full extent of the interaction between the two failure mechanisms. A more accurate representation is obtained by adopting a discrete crack approach via the use of cohesive elements, for both in-plane and out-of-plane damage. The difficulty with cohesive elements is that their location must be determined a priori in order to generate the model, whereas ideally the position of crack migration, and more generally the propagation path, should be obtained as part of the problem’s solution. With the aim of endowing current modelling approaches with truly predictive capabilities, a concept of automatic insertion of interface elements is utilized. A simple traction criterion relative to material strength, evaluated at each node of the model (or of the regions of the model where it is estimated cracks might form), allows for the determination of the initial crack location and subsequent propagation through the insertion of cohesive elements during the course of the analysis. Several experimental results are modelled using the commercial package ABAQUS/Standard with an automatic insertion subroutine developed in this work, and the results are presented to demonstrate the capabilities of this technique.
Abstract:
Introduction: Fewer than 50% of adults and 40% of youth meet US CDC guidelines for physical activity (PA), with the built environment (BE) a culprit for limited PA. A challenge in evaluating policy and BE change is the forethought to capture a priori PA behaviors and the ability to eliminate bias in post-change environments. The present objective was to analyze existing public data feeds to quantify the effectiveness of BE interventions. The Archive of Many Outdoor Scenes (AMOS) has collected 135 million images of outdoor environments from 12,000 webcams since 2006. Many of these environments have experienced BE change. Methods: One example of BE change is the addition of protected bike lanes and a bike share program in Washington, DC. We selected an AMOS webcam that captured this change. AMOS captures a photograph from each webcam every half hour. AMOS captured the 120 webcam photographs between 0700 and 1900 during the first work week of June 2009 and the 120 photographs from the same week in 2010. We used the Amazon Mechanical Turk (MTurk) website to crowd-source the image annotation. MTurk workers were paid US$0.01 to mark each pedestrian, cyclist and vehicle in a photograph. Each image was coded 5 unique times (n=1200). The data (counts by transportation mode) were downloaded to SPSS for analysis. Results: The number of cyclists per scene increased four-fold between 2009 and 2010 (F=36.72, p=0.002). There was no significant increase in pedestrians between the two years; however, there was a significant increase in the number of vehicles per scene (F=16.81, p
Abstract:
Coastal and estuarine landforms provide a physical template that not only accommodates diverse ecosystem functions and human activities, but also mediates flood and erosion risks that are expected to increase with climate change. In this paper, we explore some of the issues associated with the conceptualisation and modelling of coastal morphological change at time and space scales relevant to managers and policy makers. Firstly, we revisit the question of how to define the most appropriate scales at which to seek quantitative predictions of landform change within an age defined by human interference with natural sediment systems and by the prospect of significant changes in climate and ocean forcing. Secondly, we consider the theoretical bases and conceptual frameworks for determining which processes are most important at a given scale of interest and the related problem of how to translate this understanding into models that are computationally feasible, retain a sound physical basis and demonstrate useful predictive skill. In particular, we explore the limitations of a primary scale approach and the extent to which these can be resolved with reference to the concept of the coastal tract and application of systems theory. Thirdly, we consider the importance of different styles of landform change and the need to resolve not only incremental evolution of morphology but also changes in the qualitative dynamics of a system and/or its gross morphological configuration. The extreme complexity and spatially distributed nature of landform systems means that quantitative prediction of future changes must necessarily be approached through mechanistic modelling of some form or another. Geomorphology has increasingly embraced so-called ‘reduced complexity’ models as a means of moving from an essentially reductionist focus on the mechanics of sediment transport towards a more synthesist view of landform evolution. 
However, there is little consensus on exactly what constitutes a reduced complexity model and the term itself is both misleading and, arguably, unhelpful. Accordingly, we synthesise a set of requirements for what might be termed ‘appropriate complexity modelling’ of quantitative coastal morphological change at scales commensurate with contemporary management and policy-making requirements: 1) The system being studied must be bounded with reference to the time and space scales at which behaviours of interest emerge and/or scientific or management problems arise; 2) model complexity and comprehensiveness must be appropriate to the problem at hand; 3) modellers should seek a priori insights into what kind of behaviours are likely to be evident at the scale of interest and the extent to which the behavioural validity of a model may be constrained by its underlying assumptions and its comprehensiveness; 4) informed by qualitative insights into likely dynamic behaviour, models should then be formulated with a view to resolving critical state changes; and 5) meso-scale modelling of coastal morphological change should reflect critically on the role of modelling and its relation to the observable world.