Abstract:
Considers the application of value management during the briefing and outline design stages of building developments.
Abstract:
User-generated content (UGC) is attracting a great deal of interest - some of it effective, some misguided. This article reviews the marketing-related factors that gave rise to UGC, tracing the relevant development of market orientation, social interaction, word of mouth, brand relationships, consumer creativity, co-creation, and customization, largely through the pages of the Journal of Advertising Research over the last 40 (or so) of its 50 years. The authors then discuss the characteristic features of UGC and how they differ from (and are similar to) these concepts. The insights thus gained will help practitioners and researchers understand what UGC is (and is not) and how it should (and should not) be used.
Abstract:
Interpenetrating polymeric networks based on sodium alginate and poly(N-isopropylacrylamide) (PNIPAAm) covalently crosslinked with N,N′-methylenebisacrylamide have been investigated using rheology, thermogravimetry, differential scanning calorimetry, X-ray diffraction measurements and scanning electron microscopy (SEM). An improved elastic response of the samples with a higher PNIPAAm content and increased amount of crosslinking agent was found. The temperature-responsive behaviour of the hydrogel samples was evidenced by viscoelastic measurements performed at various temperatures. It is shown that the properties of these gels can be tuned according to composition, amount of crosslinking agent and temperature changes. X-ray scattering analysis revealed that the hydrophobic groups are locally segregated even in the swollen state whilst cryo-SEM showed the highly heterogeneous nature of the gels.
Abstract:
World-wide structural genomics initiatives are rapidly accumulating structures for which limited functional information is available. Additionally, state-of-the-art structural prediction programs are now capable of generating at least low-resolution structural models of target proteins. Accurate detection and classification of functional sites within both solved and modelled protein structures therefore represents an important challenge. We present a fully automatic site detection method, FuncSite, that uses neural network classifiers to predict the location and type of functionally important sites in protein structures. The method is designed primarily to require only backbone residue positions, without the need for specific side-chain atoms to be present. In order to highlight effective site detection in low-resolution structural models, FuncSite was used to screen model proteins generated using mGenTHREADER on a set of newly released structures. We found effective metal site detection even for moderate-quality protein models, illustrating the robustness of the method.
Abstract:
Spontaneous mimicry is a marker of empathy. Conditions characterized by reduced spontaneous mimicry (e.g., autism) also display deficits in sensitivity to social rewards. We tested if spontaneous mimicry of socially rewarding stimuli (happy faces) depends on the reward value of stimuli in 32 typical participants. An evaluative conditioning paradigm was used to associate different reward values with neutral target faces. Subsequently, electromyographic activity over the Zygomaticus Major was measured whilst participants watched video clips of the faces making happy expressions. Higher Zygomaticus Major activity was found in response to happy faces conditioned with high reward versus low reward. Moreover, autistic traits in the general population modulated the extent of spontaneous mimicry of happy faces. This suggests a link between reward and spontaneous mimicry and provides a possible underlying mechanism for the reduced response to social rewards seen in autism.
Abstract:
There are many published methods available for creating keyphrases for documents. Previous work in the field has shown that in a significant proportion of cases author-selected keyphrases are not appropriate for the document they accompany: often the keyphrases are not updated when the focus of a paper changes, or are more classificatory than explanatory. This motivates the use of automated methods to improve the selection of keyphrases. The published methods are all evaluated using different corpora, typically one relevant to their field of study. This not only makes it difficult to incorporate the useful elements of algorithms in future work but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of six corpora. The methods chosen were term frequency, inverse document frequency, the C-Value, the NC-Value, and a synonym-based approach. These methods were compared to evaluate performance and quality of results, and to provide a future benchmark. It is shown that, with the comparison metric used for this study, Term Frequency and Inverse Document Frequency were the best algorithms, with the synonym-based approach following them. Further work in the area is required to determine an appropriate (or more appropriate) comparison metric.
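The two best-performing methods in the comparison, term frequency and inverse document frequency, can be sketched in a few lines. The corpus and scoring details below are illustrative assumptions, not the paper's implementation or its evaluation corpora:

```python
# Minimal sketch of ranking candidate terms by term frequency (TF) and
# inverse document frequency (IDF), two of the five methods compared.
# The toy corpus here is purely illustrative.
import math
from collections import Counter

corpus = [
    "value management in building design",
    "value management during briefing stages",
    "neural network site detection in protein structures",
]

def tf_scores(doc: str) -> Counter:
    """Score each term in a document by its raw frequency."""
    return Counter(doc.split())

def idf_scores(docs: list) -> dict:
    """Score terms higher the fewer documents they appear in."""
    n = len(docs)
    doc_freq = Counter()
    for doc in docs:
        doc_freq.update(set(doc.split()))
    return {t: math.log(n / df) for t, df in doc_freq.items()}

idf = idf_scores(corpus)
tf = tf_scores(corpus[0])
# Rank the first document's terms by TF alone, then by IDF alone.
by_tf = sorted(tf, key=tf.get, reverse=True)
by_idf = sorted(tf, key=lambda t: idf[t], reverse=True)
```

In practice the two scores are often multiplied (TF-IDF), but the study evaluates each as a standalone ranking, which is what the sketch mirrors.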
Abstract:
Flood extents caused by fluvial floods in urban and rural areas may be predicted by hydraulic models. Assimilation may be used to correct the model state and improve the estimates of the model parameters or external forcing. One common observation assimilated is the water level at various points along the modelled reach. Distributed water levels may be estimated indirectly along the flood extents in Synthetic Aperture Radar (SAR) images by intersecting the extents with the floodplain topography. It is necessary to select a subset of levels for assimilation because adjacent levels along the flood extent will be strongly correlated. A method for selecting such a subset automatically and in near real-time is described, which would allow the SAR water levels to be used in a forecasting model. The method first selects candidate waterline points in flooded rural areas having low slope. The waterline levels and positions are corrected for the effects of double reflections between the water surface and emergent vegetation at the flood edge. Waterline points are also selected in flooded urban areas away from radar shadow and layover caused by buildings, with levels similar to those in adjacent rural areas. The resulting points are thinned to reduce spatial autocorrelation using a top-down clustering approach. The method was developed using a TerraSAR-X image from a particular case study involving urban and rural flooding. The waterline points extracted proved to be spatially uncorrelated, with levels reasonably similar to those determined manually from aerial photographs, and in good agreement with those of nearby gauges.
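The final thinning step above, reducing spatial autocorrelation by top-down clustering, can be illustrated with a simple recursive bisection: split the point set along its widest spatial axis until each cluster fits within a separation threshold, then keep one representative per cluster. This is a hypothetical sketch of the general technique, not the paper's actual algorithm or parameters:

```python
# Illustrative sketch of thinning waterline points via top-down clustering:
# recursively bisect the point set along its widest axis until each cluster
# fits within min_sep, then keep the cluster centroid as its representative.
def thin_points(points, min_sep):
    """points: list of (x, y) tuples; returns one representative per cluster."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    span_x = max(xs) - min(xs)
    span_y = max(ys) - min(ys)
    # Base case: the cluster already fits inside the separation threshold.
    if max(span_x, span_y) <= min_sep:
        return [(sum(xs) / len(xs), sum(ys) / len(ys))]
    # Split along the wider axis at its midpoint and recurse on both halves.
    axis = 0 if span_x >= span_y else 1
    mid = (min(xs) + max(xs)) / 2 if axis == 0 else (min(ys) + max(ys)) / 2
    left = [p for p in points if p[axis] <= mid]
    right = [p for p in points if p[axis] > mid]
    if not left or not right:  # degenerate split: keep one representative
        return [points[0]]
    return thin_points(left, min_sep) + thin_points(right, min_sep)
```

For example, four waterline points in two tight pairs far apart collapse to two representatives, one per pair, so adjacent (strongly correlated) levels are not assimilated twice.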
Abstract:
This paper presents the notion of Context-based Activity Design (CoBAD), which represents context, with its dynamic changes, and normative activities in an interactive system design. The development of CoBAD requires an appropriate context ontology model and inference mechanisms. The incorporation of norms and information field theory into the Context State Transition Model, and the implementation of new conflict resolution strategies based on the specific situation, are discussed. A demonstration of CoBAD using a human agent scenario in a smart home is also presented. Finally, a method of treating conflicting norms in multiple information fields is proposed.
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym-based approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters' News Corpus. The findings show that authors of Reuters' news articles provide good keyphrases, but that more often than not they do not provide any keyphrases.