193 results for Portability
Abstract:
Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased toward the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics - an RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model - was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across 28 sites where painted sensors were installed, and the kappa statistic of agreement between model estimates and painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the kappa statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a lower mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after being calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation against physical models under a wider range of climatic conditions.
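The simplest of the three models can be sketched directly. Below is a minimal illustration, assuming the commonly used 90% RH wetness threshold (the study's actual threshold and data handling may differ), of an RH-threshold wetness estimate and the kappa statistic used to score agreement with sensors:

```python
# Minimal sketch of an RH-threshold leaf wetness model and the
# kappa agreement statistic used to compare model output with
# sensor measurements. The 90% threshold and the hourly data
# below are illustrative assumptions, not the paper's values.

def rh_threshold_wet(rh_series, threshold=90.0):
    """Classify each hourly RH reading as wet (True) or dry (False)."""
    return [rh >= threshold for rh in rh_series]

def kappa(estimated, observed):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(estimated)
    po = sum(e == o for e, o in zip(estimated, observed)) / n
    p_est_wet = sum(estimated) / n
    p_obs_wet = sum(observed) / n
    pe = p_est_wet * p_obs_wet + (1 - p_est_wet) * (1 - p_obs_wet)
    return (po - pe) / (1 - pe)

rh = [95, 92, 88, 70, 60, 91, 96, 85]                       # hourly RH (%)
sensor = [True, True, True, False, False, True, True, False]  # sensor wetness
est = rh_threshold_wet(rh)
print(est)                              # hourly wetness estimates
print(round(kappa(est, sensor), 2))     # 0.75
```

Summing the wet hours of such a series over each day gives the daily LWD value compared across models in the abstract.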
Abstract:
Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied-upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, this makes it possible to distinguish, automatically yet accurately, objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, due to several constraints on the sample collection process, labeled pixels are usually scarce. When analyzing an image for which those key samples are unavailable, a viable solution is to resort to the ground truth data of other, previously acquired images. This option is attractive, but several factors such as atmospheric, ground and acquisition conditions can cause radiometric differences between the images, thereby hindering the transfer of knowledge from one image to another. The goal of this thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from the field of machine learning. First, we propose a strategy to intelligently sample the image of interest so as to collect labels only for the most useful pixels. This iterative routine is based on a constant evaluation of how pertinent the initial training data, which actually belong to a different image, are to the new image.
Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are greatly increased. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in acquisition geometry affecting series of multi-angle images. We also gauge the portability of classification models through the sequences. In both exercises, the efficacy of classic physically- and statistically-based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data spaces of two images. The projection function bridging the images allows the synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
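The "relative normalization" idea, mapping both images into a common data space before transferring a classifier, can be illustrated with a much cruder stand-in than the thesis's kernel-based framework. The sketch below uses plain per-band standardization on synthetic pixels; the data, the shift model, and the normalization choice are all assumptions made for illustration:

```python
import numpy as np

# Crude stand-in for relative normalization across images: align
# each image's per-band statistics before cross-image classification.
# Per-band z-scoring replaces the thesis's kernel-based feature
# extraction; the pixel data are synthetic.

def standardize(img):
    """Z-score each spectral band of an (n_pixels, n_bands) array."""
    return (img - img.mean(axis=0)) / img.std(axis=0)

rng = np.random.default_rng(1)
image_a = rng.normal(0.2, 0.05, size=(500, 4))  # source image pixels
image_b = image_a * 1.3 + 0.1                   # same scene, shifted radiometry

# Largest per-band mean gap before and after normalization
gap_raw = np.abs(image_a.mean(0) - image_b.mean(0)).max()
gap_norm = np.abs(standardize(image_a).mean(0) - standardize(image_b).mean(0)).max()
print(gap_raw)    # clear radiometric offset between the images
print(gap_norm)   # essentially zero after normalization
```

A classifier trained on the normalized source pixels then sees target pixels drawn from a much closer distribution, which is the mechanism behind the improved cross-image generalization reported above.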
Abstract:
In this study we propose an evaluation of the angular effects altering the spectral response of the land-cover over multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by looking at the evolution of the classification accuracy with respect to the changing observation angle. In this context, the efficiency of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
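One widely used kernel-based distance between probability distributions is the squared maximum mean discrepancy (MMD); whether this study uses exactly that estimator is an assumption here. A minimal numpy sketch with an RBF kernel, on synthetic stand-ins for pixel spectra from different observation angles:

```python
import numpy as np

# Sketch of the (biased) squared maximum mean discrepancy, a common
# kernel-based distance between probability distributions; the data
# are synthetic stand-ins for pixel spectra at different view angles.

def rbf_kernel(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared MMD between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean())

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(200, 2))   # pixels at one view angle
b = rng.normal(0.0, 1.0, size=(200, 2))   # same distribution
c = rng.normal(2.0, 1.0, size=(200, 2))   # angle-shifted distribution
print(mmd2(a, b))   # near zero: no distribution shift
print(mmd2(a, c))   # clearly larger: shift detected
```

Tracking such a distance along the in-track sequence quantifies how far each acquisition angle pushes the pixel distribution away from the reference image.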
Abstract:
This paper argues for the need for information communication technology (ICT), labor exchange (job board), and human capital ontology engineers (ontoEngineers) to jointly design and socialize an upper-level meta-ontology for people readiness and career portability. These enticing ontology research topics have yielded "independent" results, but have yet to meet the broader or "universal" requirements that emerging frameworks demand. This paper focuses on the need to develop a universal upper-level ontology and provides the reader with concepts and models that can be transformed into marketable solutions.
Abstract:
The use of the Internet as a means of ensuring greater visibility for products, services and information offered by companies has gained strength in recent decades. However, it is known that to ensure satisfaction and subsequent virtual customer loyalty, it is necessary to guarantee the quality of websites, allowing access regardless of the resources used, as well as rapid responses to requests. To assist this process, this paper presents a set of guidelines for developing websites with the quality characteristics of efficiency and portability defined by the ISO 9126 standard. An observational analysis of e-commerce websites showed that they fall short of the proposed guidelines, making their available content difficult to access. The adoption of the proposed guidelines can therefore greatly contribute to increasing the quality of websites and, consequently, enable quick and effective access regardless of the resources used.
Abstract:
Application of biogeochemical models to the study of marine ecosystems is pervasive, yet objective quantification of these models' performance is rare. Here, 12 lower trophic level models of varying complexity are objectively assessed in two distinct regions (equatorial Pacific and Arabian Sea). Each model was run within an identical one-dimensional physical framework. A consistent variational adjoint implementation assimilating chlorophyll-a, nitrate, export, and primary productivity was applied and the same metrics were used to assess model skill. Experiments were performed in which data were assimilated from each site individually and from both sites simultaneously. A cross-validation experiment was also conducted whereby data were assimilated from one site and the resulting optimal parameters were used to generate a simulation for the second site. When a single pelagic regime is considered, the simplest models fit the data as well as those with multiple phytoplankton functional groups. However, those with multiple phytoplankton functional groups produced lower misfits when the models are required to simulate both regimes using identical parameter values. The cross-validation experiments revealed that as long as only a few key biogeochemical parameters were optimized, the models with greater phytoplankton complexity were generally more portable. Furthermore, models with multiple zooplankton compartments did not necessarily outperform models with single zooplankton compartments, even when zooplankton biomass data are assimilated. Finally, even when different models produced similar least squares model-data misfits, they often did so via very different element flow pathways, highlighting the need for more comprehensive data sets that uniquely constrain these pathways.
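The cross-validation design can be illustrated with a toy model: optimize a parameter against pseudo-observations from one "site", then score that same parameter against a second "site". Everything below (the one-parameter model, the grid search standing in for the variational adjoint) is invented for illustration, not the study's biogeochemical code:

```python
import numpy as np

# Toy version of the cross-validation experiment: calibrate a model
# parameter at site A, then evaluate the resulting simulation's
# least-squares model-data misfit at site B.

def model(t, r):
    """Toy 'chlorophyll' curve with a single rate parameter r."""
    return np.exp(-r * t)

def misfit(r, t, obs):
    """Least-squares model-data misfit, as in the assimilation step."""
    return ((model(t, r) - obs) ** 2).sum()

t = np.linspace(0.0, 5.0, 20)
site_a = model(t, 0.8)          # pseudo-observations from site A
site_b = model(t, 1.1)          # pseudo-observations from site B

# Crude 1-D grid search standing in for the variational adjoint
grid = np.linspace(0.1, 2.0, 200)
r_opt = grid[np.argmin([misfit(r, t, site_a) for r in grid])]

print(round(r_opt, 2))                      # parameter recovered at site A
print(round(misfit(r_opt, t, site_b), 3))   # residual cross-site misfit
```

A portable model, in the sense used above, is one for which this cross-site misfit stays small even though the parameters were tuned elsewhere.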
Abstract:
In a communication to the Parliament and the Council entitled “Towards a modern, more European copyright framework” and dated 9 December 2015, the European Commission confirmed its intention to progressively remove the main obstacles to the functioning of the Digital Single Market for copyrighted works. The first step of this long-term plan, which was first announced in Juncker’s Political Guidelines and the Communication on “A Digital Single Market strategy for Europe”, is a proposal for a regulation aimed at ensuring the so-called ‘cross-border portability’ of online services giving access to content such as music, games, films and sporting events. In a nutshell, the proposed regulation seeks to enable consumers with legal access to such online content services in their country of residence to use the same services also when they are in another member state for a limited period of time. On the one hand, this legislative proposal has the full potential to resolve the (limited) issue of portability, which stems from the national dimension of copyright and the persisting territorial licensing and distribution of copyright content. On the other hand, as this commentary shows, the ambiguity of certain important provisions in the proposed regulation might affect its scope and effectiveness and contribute to the erosion of the principle of copyright territoriality.
Abstract:
"January 1981."
Abstract:
"Prepared under contract J-9-P-8-0191, Pension and Welfare Benefit Programs."
Abstract:
"Pension and Welfare Benefit Programs. Prepared under Contract J-9-P-8- 0191."
Abstract:
Presented at the Society's annual meeting in Hartford, Conn., July 6, 1967, by Andrew A. Melgard, Charles E. Tosch and Frank Cummings.
Abstract:
"For release on delivery; expected ... July 12, 1988."
Abstract:
The General Data Protection Regulation (GDPR) has been designed to promote the interests of individuals over those of large corporations. However, there is a need for dedicated technologies that can help companies comply with the GDPR while enabling people to exercise their rights. We argue that such a dedicated solution must address two main issues: the need for more transparency towards individuals regarding the management of their personal information, and their often hindered ability to access their personal data and make it interoperable, so that the exercise of one's rights becomes straightforward. We aim to provide a system that pushes personal data management towards the individual's control, i.e., a personal information management system (PIMS). By using distributed storage and decentralized computing networks to control online services, users' personal information can be shifted towards those directly concerned, i.e., the data subjects. The use of Distributed Ledger Technologies (DLTs) and Decentralized File Storage (DFS) as implementations of decentralized systems is of paramount importance in this case. The structure of this dissertation follows an incremental approach, describing a set of decentralized systems and models that revolve around personal data and their subjects. Each chapter builds on the previous one and discusses the technical implementation of a system and its relation to the corresponding regulations. We refer to the EU regulatory framework, including the GDPR, eIDAS, and the Data Governance Act, to derive the functional and non-functional drivers of our final system architecture. In our PIMS design, personal data are kept in a Personal Data Space (PDS) consisting of encrypted personal data referring to the subject, stored in a DFS. On top of that, a network of authorization servers acts as a data intermediary, providing access to potential data recipients through smart contracts.
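The two decentralized ingredients of this design, content-addressed storage (the DFS role) and access grants controlled by the data subject (the smart-contract role), can be sketched with simplified in-memory stand-ins. The classes and names below are invented for illustration; the dissertation's system relies on an actual DFS and on-chain contracts:

```python
import hashlib
import json

# Simplified stand-ins for the PIMS building blocks described above:
# a content-addressed blob store (DFS role) and a grant registry
# controlled by the data subject (smart-contract role).

class ContentStore:
    """DFS stand-in: blobs are addressed by the hash of their content."""
    def __init__(self):
        self._blobs = {}
    def put(self, data: bytes) -> str:
        cid = hashlib.sha256(data).hexdigest()
        self._blobs[cid] = data
        return cid
    def get(self, cid: str) -> bytes:
        return self._blobs[cid]

class GrantLedger:
    """Contract stand-in: the subject grants or revokes access per CID."""
    def __init__(self, subject: str):
        self.subject = subject
        self._grants = set()
    def grant(self, recipient: str, cid: str):
        self._grants.add((recipient, cid))
    def allowed(self, recipient: str, cid: str) -> bool:
        return (recipient, cid) in self._grants

store = ContentStore()
cid = store.put(json.dumps({"name": "alice"}).encode())  # PDS record (unencrypted here)
ledger = GrantLedger(subject="alice")
ledger.grant("research-lab", cid)
print(ledger.allowed("research-lab", cid))  # True: access granted
print(ledger.allowed("ad-broker", cid))     # False: never granted
```

In the actual design the stored blobs are encrypted and the ledger check is enforced by authorization servers, but the control flow, subject-issued grants gating content-addressed data, is the same.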