43 results for: methodology of dialectical mediation
Abstract:
The level of agreement between climate model simulations and observed surface temperature change is a topic of scientific and policy concern. While the Earth system continues to accumulate energy due to anthropogenic and other radiative forcings, estimates of recent surface temperature evolution fall at the lower end of climate model projections. Global mean temperatures from climate model simulations are typically calculated using surface air temperatures, while the corresponding observations are based on a blend of air and sea surface temperatures. This work quantifies a systematic bias in model-observation comparisons arising from differential warming rates between sea surface temperatures and surface air temperatures over oceans. A further bias arises from the treatment of temperatures in regions where the sea ice boundary has changed. Applying the methodology of the HadCRUT4 record to climate model temperature fields accounts for 38% of the discrepancy in trend between models and observations over the period 1975–2014.
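The core of the bias described above is that a HadCRUT4-style record samples sea surface temperature over open ocean but air temperature over land and sea ice, whereas model global means usually use air temperature everywhere. The sketch below is a minimal toy illustration of that blending step, not the paper's code; the field names (tas, tos), grid, and data are invented for demonstration.

```python
import numpy as np

# Toy illustration (not the paper's code): blend model surface air temperature
# (tas) with sea surface temperature (tos) the way a HadCRUT4-style record
# samples the surface, then take an area-weighted global mean.
def blended_global_mean(tas, tos, land_frac, ice_frac, lat):
    """tas, tos, land_frac, ice_frac: 2-D (lat, lon) fields; lat: 1-D degrees."""
    # Use SST over open ocean; use air temperature over land and over sea ice.
    open_ocean = (1.0 - land_frac) * (1.0 - ice_frac)
    blended = open_ocean * tos + (1.0 - open_ocean) * tas
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(blended)
    return np.average(blended, weights=weights)

# Example with synthetic fields on a coarse grid
lat = np.linspace(-89, 89, 36)
shape = (36, 72)
rng = np.random.default_rng(0)
tas = 288 + rng.normal(0, 2, shape)
tos = tas - 0.1                      # SSTs slightly cooler than the air above
land = rng.random(shape) < 0.3
ice = (np.abs(lat)[:, None] > 70) & ~land
print(blended_global_mean(tas, tos, land.astype(float), ice.astype(float), lat))
```

Applying such a blended, masked sampling to model output rather than using the pure air-temperature mean is what recovers part of the model-observation trend discrepancy.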
Abstract:
Modern methods of spawning new technological motifs are not appropriate when the goal is to realize artificial life as an actual real-world entity in its own right (Pattee 1995; Brooks 2006; Chalmers 1995). Many fundamental aspects of such a machine are absent from common methods, which generally lack methodologies of construction. In this paper we combine classical and modern studies in an attempt to realize an artificial life form from first principles. A model of an algorithm is introduced, its methodology of construction is presented, and the fundamental source from which it sprang is discussed.
Abstract:
In developing Isotype, Otto Neurath and his colleagues were the first to systematically explore a consistent visual language as part of an encyclopedic approach to representing all aspects of the physical world. The pictograms used in Isotype have a secure legacy in today's public information symbols, but Isotype was more than this: it was designed to communicate social facts memorably to less educated groups, including schoolchildren and workers, reflecting its initial testing ground in the socialist municipality of Vienna during the 1920s. The social engagement and methodology of Isotype are examined here in order to draw some lessons for information design today.
Abstract:
A novel partitioned least squares (PLS) algorithm is presented, in which estimates from several simple system models are combined by means of a Bayesian methodology of pooling partial knowledge. The method has the added advantage that, when the simple models are of a similar structure, it lends itself directly to parallel processing procedures, thereby speeding up the entire parameter estimation process severalfold.
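One common way to pool partial knowledge in a Bayesian fashion is to weight each sub-model's parameter estimate by its precision (inverse covariance). The sketch below illustrates that generic idea only; the abstract does not spell out the paper's exact pooling rule, and the function and data here are invented for demonstration.

```python
import numpy as np

# Hedged sketch (not the paper's algorithm): combine parameter estimates from
# several simple sub-models by Bayesian, precision-weighted pooling. Each
# sub-model i supplies an estimate theta_i with covariance P_i.
def pool_estimates(thetas, covs):
    precisions = [np.linalg.inv(P) for P in covs]
    pooled_precision = sum(precisions)
    pooled_cov = np.linalg.inv(pooled_precision)
    pooled_mean = pooled_cov @ sum(Pi @ th for Pi, th in zip(precisions, thetas))
    return pooled_mean, pooled_cov

# Two sub-model estimates of the same 2-parameter vector
theta1, P1 = np.array([1.0, 0.5]), np.diag([0.04, 0.09])
theta2, P2 = np.array([1.2, 0.4]), np.diag([0.01, 0.25])
mean, cov = pool_estimates([theta1, theta2], [P1, P2])
print(mean, np.diag(cov))
```

Because each sub-model's estimate and covariance can be computed independently before pooling, this structure maps naturally onto the parallel processing advantage the abstract mentions.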
Abstract:
Impact Assessments (IAs) were introduced at the EU level under the rhetorical facade of ‘better regulation’. The actual aim was to improve not only the quality but also the reputation of EU regulation before stakeholders. However, evidence from a number of evaluations indicated that IAs have yet to achieve acceptable quality standards. The paper offers an overview of different disciplinary approaches to IAs. It suggests that risk regulation provides the theoretical foundations needed to understand the role of IAs in the EU decision-making process. The analysis of 60 early preliminary IAs provides empirical evidence regarding policy alternatives, consultation methodology and the use of quantitative techniques. Findings suggest that these early IAs were used mainly to provide some empirical evidence for regulatory intervention in front of stakeholders. The paper concludes with assumptions about the future role of IAs at the EU level.
Abstract:
A live work where digital and analogue media collide. This work uses the Internet as a central point of departure in that the script is taken from the Wikipedia entry for the word 'slideshow'. Words are randomly extracted and transferred onto photographic 35mm slide to be projected with analogue carousel slide projectors taking the audience into a visual wordplay, from Google to PowerPoint presentation. The sound of projectors is manipulated gradually into a clashing, confrontational, digital/analogue crescendo. 'Slideshow' investigates how information is sourced, navigated and considered in a culture of accelerating mediation. It posits the notion of a post-digital era in which we are increasingly faced with challenging questions of authenticity and authority.
Abstract:
How do changing notions of children’s reading practices alter or even create classic texts? This article looks at how the nineteenth-century author Jules Verne (1828-1905) was modernised by Hachette for their Bibliothèque Verte children’s collection in the 1950s and 60s. Using the methodology of adaptation studies, the article reads the abridged texts in the context of the concerns that emerged in postwar France about what children were reading. It examines how these concerns shaped editorial policy, and the transformations that Verne’s texts underwent before they were considered suitable for the children of the baby-boom generation. It asks whether these adapted versions damaged Verne’s reputation, as many literary scholars have suggested, or if the process of dividing his readership into children and adults actually helped to reinforce the new idea of his texts as complex and multilayered. In so doing, this article provides new insights into the impact of postwar reforms on children’s publishing and explores the complex interplay between abridgment, censorship, children’s literature and the adult canon.
Abstract:
The frequencies of atmospheric blocking in both winter and summer, and the changes in them from the 20th to the 21st century as simulated in twelve CMIP5 models, are analysed. The RCP 8.5 high-emission scenario runs are used to represent the 21st century. The analysis is based on the wave-breaking methodology of Pelly and Hoskins (2003a). It differs from the Tibaldi and Molteni (1990) index in viewing equatorward cut-off lows and poleward blocking highs in an equal manner as indicating a disruption to the westerlies. 1-dimensional and 2-dimensional diagnostics are applied to identify blocking of the mid-latitude storm track and also at higher latitudes. Winter blocking frequency is found to be generally underestimated. The models give a decrease in the European blocking maximum in the 21st century, consistent with the results of other studies. There is a mean 21st-century winter poleward shift of high-latitude blocking, but little agreement between the models on the details. In summer, Eurasian blocking is also underestimated in the models, whereas it is too high over the high-latitude ocean basins. A decrease in European blocking frequency in the 21st-century model runs is again found. However, in summer there is a clear eastward shift of blocking over Eastern Europe and Western Russia, in a region close to the blocking that dominated the Russian summer of 2010. While summer blocking decreases in general, the poleward shift of the storm track into the region of frequent high-latitude blocking may mean that the incidence of storms being obstructed by blocks actually increases.
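To make the contrast between blocking indices concrete, the toy sketch below implements a 1-dimensional index in the Tibaldi and Molteni (1990) style that the abstract mentions for comparison; it is not the Pelly and Hoskins wave-breaking index the paper actually uses, and the grid, thresholds, and data are invented for illustration.

```python
import numpy as np

# Toy Tibaldi-Molteni-style blocking test on a 500 hPa geopotential height
# field z500 (lat, lon): blocking at a longitude requires a reversed meridional
# gradient to the south of a central latitude and strong westerlies to the north.
def tm_blocking(z500, lat, lat0=60.0, dlat=20.0):
    i0 = np.abs(lat - lat0).argmin()            # central blocking latitude
    i_s = np.abs(lat - (lat0 - dlat)).argmin()  # southern reference latitude
    i_n = np.abs(lat - (lat0 + dlat)).argmin()  # northern reference latitude
    ghgs = (z500[i0] - z500[i_s]) / dlat        # southern gradient (reversed if > 0)
    ghgn = (z500[i_n] - z500[i0]) / dlat        # northern gradient
    return (ghgs > 0) & (ghgn < -10.0)          # boolean flag per longitude

lat = np.linspace(20, 85, 27)
lon = np.linspace(0, 357.5, 144)
rng = np.random.default_rng(1)
z500 = 5500 + 20 * rng.standard_normal((lat.size, lon.size))  # synthetic field
print(tm_blocking(z500, lat).sum(), "blocked longitudes in this toy field")
```

The Pelly-Hoskins approach replaces this height-gradient test with a reversal of the meridional potential-temperature contrast, which is what lets cut-off lows and blocking highs be treated on an equal footing.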
Abstract:
J.L. Austin is regarded as having an especially acute ear for fine distinctions of meaning overlooked by other philosophers. Austin employs an informal experimental approach to gathering evidence in support of these fine distinctions in meaning, an approach that has become a standard technique for investigating meaning in both philosophy and linguistics. In this paper, we subject Austin's methods to formal experimental investigation. His methods produce mixed results: We find support for his most famous distinction, drawn on the basis of his ‘donkey stories’, that ‘mistake’ and ‘accident’ apply to different cases, but not for some of his other attempts to distinguish the meaning of philosophically significant terms (such as ‘intentionally’ and ‘deliberately’). We critically examine the methodology of informal experiments employed in ordinary language philosophy and much of contemporary philosophy of language and linguistics, and discuss the role that experimenter bias can play in influencing judgments about informal and formal linguistic experiments.
Abstract:
Recently, the original benchmarking methodology of the Sustainable Value approach became the subject of serious debate. While Kuosmanen and Kuosmanen (2009b) critically question its validity by introducing productive efficiency theory, Figge and Hahn (2009) put forward that the implementation of productive efficiency theory severely conflicts with the original financial economics perspective of the Sustainable Value approach. We argue that the debate is confusing because the original Sustainable Value approach pursues two largely incompatible objectives. Nevertheless, we maintain that both ways of benchmarking can provide useful and, moreover, complementary insights. If one intends to present the overall resource efficiency of the firm from the investor's viewpoint, we recommend the original benchmarking methodology. If, on the other hand, one aspires to create a prescriptive tool setting up some sort of reallocation scheme, we advocate implementation of productive efficiency theory. Although the discussion on benchmark application is certainly substantial, we should prevent the debate from becoming correspondingly narrowed. Beyond the benchmark concern, we see several other challenges in the development of the Sustainable Value approach: (1) a more systematic resource selection, (2) the inclusion of the value chain and (3) additional policy-related analyses in order to increase interpretative power.
Abstract:
This paper concerns the philosophical significance of a choice about how to design the context-shifting experiments used by contextualists and anti-intellectualists: Should contexts be judged jointly, with contrast, or separately, without contrast? Findings in experimental psychology suggest (1) that certain contextual features are difficult to evaluate when considered separately, and there are reasons to think that one feature that interests contextualists and anti-intellectualists—stakes or importance—is such a difficult-to-evaluate attribute, and (2) that joint evaluation of contexts can yield judgments that are more reflective and rational in certain respects. With those two points in mind, a question is raised about what source of evidence provides better support for philosophical theories of how contextual features affect knowledge ascriptions and evidence: Should we prefer evidence consisting of "ordinary" judgments, or more reflective, perhaps more rational judgments? That question is answered in relation to different accounts of what such theories aim to explain, and it is concluded that evidence from contexts evaluated jointly should be an important source of evidence for contextualist and anti-intellectualist theories, a conclusion that is at odds with the methodology of some recent studies in experimental epistemology.
Abstract:
The main aim of this chapter is to offer an overview of research that has adopted the methodology of Corpus Linguistics to study aspects of language use in the media. The overview begins by introducing the key principles and analytical tools adopted in corpus research. To demonstrate the contribution of corpus approaches to media linguistics, a selection of recent corpus studies is subsequently discussed. The final section summarises the strengths and limitations of corpus approaches and discusses avenues for further research.
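Two of the staple analytical tools such overviews introduce are word-frequency lists and keyword-in-context (KWIC) concordances. The snippet below is a minimal, self-contained illustration of both on an invented one-sentence "corpus"; it is not drawn from the chapter itself.

```python
from collections import Counter
import re

# Tiny invented corpus for demonstration only
corpus = ("The newspaper reported the crisis. Readers discussed the crisis "
          "online, and the word crisis dominated the coverage.")
tokens = re.findall(r"[a-z]+", corpus.lower())

# Frequency list: the most common word forms in the corpus
print(Counter(tokens).most_common(5))

# KWIC concordance for a node word, with a window of 3 tokens on each side
def kwic(tokens, node, window=3):
    for i, tok in enumerate(tokens):
        if tok == node:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield f"{left:>30} [{node}] {right}"

for line in kwic(tokens, "crisis"):
    print(line)
```

Real corpus studies of media language apply the same operations at scale, typically alongside collocation and keyness statistics.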
Abstract:
This paper describes a methodology for providing multiprobability predictions for proteomic mass spectrometry data. The methodology is based on a newly developed machine learning framework called Venn machines and allows a valid probability interval to be output; it is designed for mass spectrometry data. For demonstration purposes, we applied this methodology to MALDI-TOF data sets in order to predict the diagnosis of heart disease and the early diagnosis of ovarian cancer and breast cancer. The experiments showed that the probability intervals are narrow, that is, the output of the multiprobability predictor is close to a single probability distribution. In addition, the probability intervals produced for the heart disease and ovarian cancer data were more accurate than the output of the corresponding probability predictor. When Venn machines were forced to make point predictions, the accuracy of these predictions was, for most data sets, better than the accuracy of the underlying algorithm that outputs a single probability distribution over labels. Application of this methodology to MALDI-TOF data sets empirically demonstrates its validity. The accuracy of the proposed method on the ovarian cancer data rises from 66.7% eleven months in advance of the moment of diagnosis to 90.2% at the moment of diagnosis. The same approach has been applied to heart disease data without time dependency, although the achieved accuracy was not as high (up to 69.9%). The methodology also allowed us to confirm mass spectrometry peaks previously identified as carrying statistically significant information for discriminating between controls and cases.
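The sketch below is a minimal toy version of the Venn-machine idea for binary labels: each hypothetical label of the test object is tried in turn, examples are grouped by a simple taxonomy (here, nearest class centroid), and the empirical label frequency inside the test object's group yields a lower/upper probability. The taxonomy, classifier, and data are invented for illustration and differ from the paper's setup.

```python
import numpy as np

# Toy Venn machine for binary labels (not the paper's exact method).
def venn_predict(X, y, x_new, labels=(0, 1)):
    p1 = []  # probability that the new label is 1, under each label hypothesis
    for y_hyp in labels:
        Xa = np.vstack([X, x_new])
        ya = np.append(y, y_hyp)
        centroids = {c: Xa[ya == c].mean(axis=0) for c in labels}

        # Taxonomy: each example's category is its nearest class centroid
        def category(v):
            return min(labels, key=lambda c: np.linalg.norm(v - centroids[c]))

        cats = np.array([category(v) for v in Xa])
        in_cat = ya[cats == cats[-1]]       # labels sharing the test object's category
        p1.append(np.mean(in_cat == 1))     # empirical frequency of label 1
    return min(p1), max(p1)                 # probability interval for label 1

# Synthetic two-class data and one test object
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(venn_predict(X, y, np.array([2.5, 2.5])))
```

A narrow interval, as reported in the abstract for the MALDI-TOF experiments, corresponds to the two hypothetical-label frequencies nearly agreeing, so the multiprobability output behaves almost like a single probability.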