5 results for Learned family (William Learned, d. 1646)
in Aston University Research Archive
Abstract:
Interpolated data are an important part of the environmental information exchange, as many variables can only be measured at discrete sampling locations. Spatial interpolation is a complex operation that has traditionally required expert treatment, making automation a serious challenge. This paper presents a few lessons learnt from INTAMAP, a project that is developing an interoperable web processing service (WPS) for the automatic interpolation of environmental data using advanced geostatistics, adopting a Service Oriented Architecture (SOA). The “rainbow box” approach we followed provides access to the functionality at a whole range of different levels. We show here how the integration of open standards, open source and powerful statistical processing capabilities allows us to automate a complex process while offering users a level of access and control that best suits their requirements. This facilitates benchmarking exercises as well as the regular reporting of environmental information without requiring remote users to have specialized skills in geostatistics.
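To make the interpolation step concrete: INTAMAP itself uses advanced geostatistics (e.g. kriging) behind a WPS, which is well beyond a few lines of code, but the basic idea of estimating a value between discrete sampling locations can be sketched with simple inverse-distance weighting. The function name and the sample data below are illustrative assumptions, not part of the INTAMAP API.

```python
import math

def idw_interpolate(samples, x, y, power=2.0):
    """Inverse-distance-weighted estimate at (x, y).

    samples: list of (sx, sy, value) tuples, i.e. measurements taken
    at discrete sampling locations.
    """
    num, den = 0.0, 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return v  # query point coincides with a sample: return it exactly
        w = d ** -power
        num += w * v
        den += w
    return num / den

# Four measurements on the corners of a unit square
obs = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 2.0), (1, 1, 3.0)]
print(idw_interpolate(obs, 0.5, 0.5))  # equidistant from all samples -> their mean
```

A production service like the one described would replace this deterministic weighting with a geostatistical model that also quantifies prediction uncertainty; the point here is only what "interpolation between sampling locations" means operationally.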
Abstract:
Technological capabilities in Chinese manufacturing have been transformed in the last three decades. However, the extent to which and how domestic-market-oriented state-owned enterprises (SOEs) have developed their capabilities remain important questions. The East Asian latecomer model has been adapted to study six Chinese SOEs in the automotive, steel and machine tools sectors to assess the capability levels attained and the role of external sources and internal efforts in developing them. All six enterprises demonstrate high competence in operating established technology, managing investment and making product and process improvements, but differ in innovative capability. While the East Asian latecomer model, in which linking, leveraging and learning explain technological capability development, is relevant for the companies studied, it needs to be adapted for Chinese SOEs to take account of the types of external links and leverage of enterprises, the role of government, enterprise-level management motives and means of financing development.
Abstract:
While knowledge about standardization of skin protection against ultraviolet radiation (UVR) has progressed over the past few decades, there is no uniform and generally accepted standardized measurement for UV eye protection. The literature provides solid evidence that UV can induce considerable damage to structures of the eye. As well as damaging the eyelids and periorbital skin, chronic UV exposure may also affect the conjunctiva and lens. Clinically, this damage can manifest as skin cancer and premature skin ageing as well as the development of pterygia and premature cortical cataracts. Modern eye protection, used daily, offers the opportunity to prevent these adverse sequelae of lifelong UV exposure. A standardized, reliable and comprehensive label for consumers and professionals is currently lacking. In this review we (i) summarize the existing literature about UV radiation-induced damage to the eye and surrounding skin; (ii) review the recent technological advances in UV protection by means of lenses; (iii) review the definition of the Eye-Sun Protection Factor (E-SPF®), which describes the intrinsic UV protection properties of lenses and lens coating materials based on their capacity to absorb or reflect UV radiation; and (iv) propose a strategy for establishing the biological relevance of the E-SPF. © 2013 John Wiley & Sons A/S.
Abstract:
Latent topics derived by topic models such as Latent Dirichlet Allocation (LDA) are the result of hidden thematic structures which provide further insights into the data. The automatic labelling of such topics derived from social media, however, poses new challenges, since topics may characterise novel events happening in the real world. Existing automatic topic labelling approaches which depend on external knowledge sources become less applicable here, since relevant articles/concepts of the extracted topics may not exist in external sources. In this paper we propose to address the problem of automatic labelling of latent topics learned from Twitter as a summarisation problem. We introduce a framework which applies summarisation algorithms to generate topic labels. These algorithms are independent of external sources and rely only on the identification of dominant terms in documents related to the latent topic. We compare the efficiency of existing state-of-the-art summarisation algorithms. Our results suggest that summarisation algorithms generate better topic labels, which capture event-related context, compared to the top-n terms returned by LDA. © 2014 Association for Computational Linguistics.
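The core idea — labelling a topic from dominant terms in its associated documents rather than from external knowledge — can be sketched with a simple term-frequency proxy. This is a stand-in for illustration only: the paper compares real summarisation algorithms, and the function name, stopword list and example tweets below are assumptions, not the authors' method.

```python
from collections import Counter

def label_topic(docs, stopwords=frozenset(), top_k=3):
    """Label a latent topic by the dominant terms of its related documents.

    docs: texts (e.g. tweets) already associated with one latent topic.
    Returns the top_k most frequent non-stopword terms as a candidate label.
    """
    counts = Counter()
    for doc in docs:
        counts.update(t for t in doc.lower().split() if t not in stopwords)
    return [term for term, _ in counts.most_common(top_k)]

# Hypothetical tweets assigned to one topic by LDA
tweets = [
    "earthquake hits city centre",
    "major earthquake reported near city",
    "rescue teams respond to earthquake",
]
print(label_topic(tweets, stopwords={"to", "near", "hits"}))
```

Because the terms come straight from the topic's own documents, the label can name a novel event (here, the hypothetical earthquake) even when no external knowledge base mentions it — which is the motivation the abstract gives for preferring summarisation over knowledge-based labelling.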