982 results for Web-Resource
Abstract:
Recently, many efforts have been made in academia to adapt degree programmes to the new European Higher Education Area (EHEA). New technologies have been the most important factor in carrying out this adaptation. In particular, Web 2.0 tools have spread quickly, not only on the web but across all educational levels. Nevertheless, it is now necessary to evaluate whether all these efforts and changes, carried out in order to improve students' academic performance, have produced good results. The aim of this paper is therefore to study the impact of the implementation of information and communication technologies (ICTs) in a subject belonging to a Master's degree at the University of Alicante in the 2010-2011 academic year: an elective course called "Advanced Visual Ergonomics" from the Master in Clinical Optometry and Vision. The methodology used to teach this course differs from the traditional one in many respects. For example, one of the resources used in the course is a blog developed specifically to coordinate a series of virtual assignments, whose purpose is for the student to explore specific aspects of the current topic in depth; the student then takes an active role by writing a personal assessment on the blog. The course plan also includes face-to-face lessons, in which the teacher presents certain topics in a more traditional way, that is, through lectures supported by audiovisual materials such as PowerPoint presentations. To evaluate the quality of the results achieved with this methodology, this work collects the personal assessments of the students who completed the course during this academic year. In particular, we want to know their opinion of the resources used and the methodology followed. The tool used to collect this information was a questionnaire evaluating different aspects of the course: general opinion, quality of the information received, satisfaction with the methodology followed and the students' critical awareness. The design of this questionnaire is very important for obtaining conclusive information about the methodology followed in the course. The questionnaire must have an adequate number of questions; if it has too many, it may bore the student, who will then not pay enough attention. The questions should be well written, with a clear structure and message, to avoid confusion and ambiguity. The questions should be objective, without suggesting a desired answer. In addition, the questionnaire should be interesting, to encourage the student's engagement. In conclusion, the questionnaire developed for this subject provided good information for evaluating whether the methodology was a useful tool for teaching "Advanced Visual Ergonomics". Furthermore, the students' opinions collected through the questionnaire may be very helpful for improving this didactic resource.
Abstract:
In the EU, resource efficiency has been high on the political agenda since 2011, when the European Commission first included it as one of the seven flagship initiatives in its Europe 2020 Strategy for “smart, sustainable and inclusive growth”. Resource efficiency is not only considered an environmental necessity, but also a political, economic and security opportunity. This paper first stresses the benefits and opportunities for the EU of improving its resource efficiency. It then explains the added value of the www.measuring-progress.eu web tool, which aims to improve the way policy-makers and others involved in the policy process can access, understand and use indicators for resource efficiency. It provides practical examples of relevant indicators in the form of the EU Resource Efficiency Scoreboard and a case study showing how the web tool established by NETGREEN can be used in practice. The paper concludes with a number of policy messages.
Abstract:
EPA pub. number SW-10P, per NSCEP's publication title list.
Abstract:
Coverage as of 2/20/03: 1997.
Abstract:
This paper reports on an ongoing partnership between Queensland University of Technology and Volunteering Queensland regarding the development and revision of a website for community leaders. The website, designed in late 2003, was established to provide a range of learning activities for community leaders, including a problem-based learning activity, case studies of community leaders and a range of resources deemed significant for leaders in the community. To date, anecdotal evidence, as well as harder evidence such as the number of visits to the site, indicates that the site is a valuable resource for community leaders. The purpose of this paper is firstly to investigate the utility of the site and secondly to consider some broader issues concerning its sustainability. To achieve this, the paper explores the perceptions of (i) a group of community leaders regarding the strengths and weaknesses of the site; and (ii) key stakeholders (from QUT and Volunteering Queensland) who participated in a focus group discussion to consider important issues relating to its management and sustainability. Themes emerging from the two groups are presented and implications for small-scale partnership projects such as this one are discussed.
Abstract:
The Semantic Web relies on carefully structured, well-defined data to allow machines to communicate and understand one another. In many domains (e.g. geospatial) the data being described contains some uncertainty, often due to incomplete knowledge; meaningful processing of this data requires these uncertainties to be carefully analysed and integrated into the process chain. Currently, within the Semantic Web there is no standard mechanism for interoperable description and exchange of uncertain information, which renders the automated processing of such information implausible, particularly where error must be considered and captured as it propagates through a processing sequence. In particular we adopt a Bayesian perspective and focus on the case where the inputs/outputs are naturally treated as random variables. This paper discusses a solution to the problem in the form of the Uncertainty Markup Language (UncertML). UncertML is a conceptual model, realised as an XML schema, that allows uncertainty to be quantified in a variety of ways, i.e. as realisations, statistics and probability distributions. UncertML is based upon a soft-typed XML schema design that provides a generic framework from which any statistic or distribution may be created. Making extensive use of Geography Markup Language (GML) dictionaries, UncertML provides a collection of definitions for common uncertainty types. Containing both written descriptions and mathematical functions, encoded as MathML, the definitions within these dictionaries provide a robust mechanism for defining any statistic or distribution and can be easily extended. Uniform Resource Identifiers (URIs) are used to introduce semantics to the soft-typed elements by linking to these dictionary definitions. The INTAMAP (INTeroperability and Automated MAPping) project provides a use case for UncertML. This paper demonstrates how observation errors can be quantified using UncertML and wrapped within an Observations & Measurements (O&M) Observation. The interpolation service uses the information within these observations to influence the prediction outcome. The output uncertainties may be encoded in a variety of UncertML types, e.g. a series of marginal Gaussian distributions, a set of statistics, such as the first three marginal moments, or a set of realisations from a Monte Carlo treatment. Quantifying and propagating uncertainty in this way allows such interpolation results to be consumed by other services. This could form part of a risk management chain or a decision support system, and ultimately paves the way for complex data processing chains in the Semantic Web.
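As a hedged illustration of the soft-typed, URI-linked encoding described above, the following Python sketch builds an UncertML-style element for a marginal Gaussian; the namespace, element names and dictionary URIs are assumptions for illustration, not the normative schema.

```python
# Illustrative sketch only: element and dictionary names are assumed,
# not taken from the normative UncertML schema.
import xml.etree.ElementTree as ET

UN = "http://www.uncertml.org/assumed"          # hypothetical namespace
DICT = "http://www.uncertml.org/distributions"  # hypothetical dictionary base URI

def gaussian_element(mean: float, variance: float) -> ET.Element:
    """Encode a marginal Gaussian as a soft-typed element whose semantics
    come from a dictionary definition referenced by URI."""
    dist = ET.Element(f"{{{UN}}}Distribution",
                      {"definition": f"{DICT}/normal"})
    ET.SubElement(dist, f"{{{UN}}}parameter",
                  {"definition": f"{DICT}/normal#mean"}).text = str(mean)
    ET.SubElement(dist, f"{{{UN}}}parameter",
                  {"definition": f"{DICT}/normal#variance"}).text = str(variance)
    return dist

if __name__ == "__main__":
    elem = gaussian_element(12.3, 2.25)
    print(ET.tostring(elem, encoding="unicode"))
```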
Abstract:
In this paper the key features of a two-layered model for describing the semantics of dynamic web resources are introduced. In the current Semantic Web proposal [Berners-Lee et al., 2001], web resources are classified into static ontologies, which describe the semantic network of their inter-relationships [Kalianpur, 2001][Handschuh & Staab, 2002] and complex constraints described by quantified logical formulas [Boley et al., 2001][McGuinnes & van Harmelen, 2004][McGuinnes et al., 2004]; the basic idea is that software agents can use automatic reasoning techniques to relate resources and to support sophisticated web applications. On the other hand, web resources are also characterized by their dynamical aspects, which are not adequately addressed by current web models. Resources on the web are dynamic since, in the minimal case, they can appear on or disappear from the web and their content is updated. In addition, resources can traverse different states, which characterize the resource life-cycle, each state corresponding to different possible uses of the resource. Finally, most resources are timed, i.e. the information they provide makes sense only if contextualised with respect to time, and their validity and accuracy are strongly bounded by time. Temporal projection and deduction based on the dynamical and time constraints of the resources can be made and exploited by software agents [Hendler, 2001] in order to make predictions about the availability and the state of a resource, to decide when to consult the resource itself, or to deliberately induce a resource state change in order to reach some agent goal, as in the automated planning framework [Fikes & Nilsson, 1971][Bacchus & Kabanza, 1998].
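A minimal sketch of the kind of dynamic layer described above (resource states, legal transitions and a temporal validity interval) could look like the following; the state names and class are illustrative assumptions, not the model defined in the paper.

```python
# Illustrative sketch: state and attribute names are assumptions,
# not the two-layered model defined in the paper.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# A hypothetical life-cycle for a web resource.
TRANSITIONS = {
    "announced": {"available"},
    "available": {"updated", "unavailable"},
    "updated": {"available", "unavailable"},
    "unavailable": set(),
}

@dataclass
class DynamicResource:
    uri: str
    state: str = "announced"
    valid_from: Optional[datetime] = None
    valid_until: Optional[datetime] = None

    def change_state(self, new_state: str) -> None:
        """Move the resource along its life-cycle, rejecting illegal steps."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} is not allowed")
        self.state = new_state

    def is_valid(self, at: datetime) -> bool:
        """Temporal validity: the content only makes sense inside its interval."""
        lo = self.valid_from or datetime.min
        hi = self.valid_until or datetime.max
        return lo <= at <= hi
```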
Abstract:
The manufacturing industry faces many challenges, such as reducing time-to-market and cutting costs. In order to meet these increasing demands, effective methods are needed to support the early product development stages by bridging the gap between communicating early design ideas and evaluating manufacturing performance. This paper introduces methods of linking the design and manufacturing domains using disparate technologies. The combined technologies include knowledge management support for product lifecycle management systems, Enterprise Resource Planning (ERP) systems, aggregate process planning systems, workflow management and data exchange formats. A case study has been used to demonstrate the use of these technologies, illustrated by adding manufacturing knowledge to generate alternative early process plans which are in turn used by an ERP system to obtain and optimise a rough-cut capacity plan. Copyright © 2010 Inderscience Enterprises Ltd.
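As a hedged sketch of the final step, rough-cut capacity planning can be approximated by aggregating the load each alternative process plan would place on a work centre and comparing it with available capacity; the data layout and numbers below are assumptions for illustration, not the paper's system.

```python
# Illustrative sketch: data structures are assumptions, not the paper's system.
from collections import defaultdict

def rough_cut_load(process_plan, quantity):
    """Aggregate load (hours) per work centre for one process plan.
    Each operation: (work_centre, setup_hours, run_hours_per_unit)."""
    load = defaultdict(float)
    for work_centre, setup, run_per_unit in process_plan:
        load[work_centre] += setup + run_per_unit * quantity
    return load

def check_capacity(load, capacity):
    """Return work centres whose load exceeds the available capacity (overload in hours)."""
    return {wc: load[wc] - capacity.get(wc, 0.0)
            for wc in load if load[wc] > capacity.get(wc, 0.0)}

if __name__ == "__main__":
    plan = [("milling", 1.0, 0.05), ("assembly", 0.5, 0.10)]
    load = rough_cut_load(plan, quantity=500)
    print(check_capacity(load, {"milling": 20.0, "assembly": 60.0}))
```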
Defining the role of floating periphyton mats in shaping food-web dynamics in the Florida Everglades
Abstract:
Expansive periphyton mats are a striking characteristic of the Florida Everglades. Floating periphyton mats are home to a diverse macroinvertebrate community dominated by chironomid and ceratopogonid larvae and amphipods that use the mat as both a food resource and a refuge from predation. While this periphyton complex functions as a self-organizing system, it also serves as a base for trophic interactions with larger organisms. The purpose of my research was to quantify variation in the macroinvertebrate community inhabiting floating periphyton mats, describe the role of mats in shaping food-web dynamics, and describe how these trophic interactions change with eutrophication. I characterized the macroinvertebrate community inhabiting periphyton throughout a wet season by describing spatial variation on scales from 0.2 m to 3 km. Floating periphyton mats contained a diverse macroinvertebrate community, with greater taxonomic richness and higher densities of many taxa than adjacent microhabitats. Macroinvertebrate density increased through the wet season as periphyton mats developed. While some variation was noted among sites, spatial patterns were not observed on smaller scales. I also sampled ten sites representing gradients of hydroperiod and nutrient (P) levels. The density of macroinvertebrates inhabiting periphyton mats increased with increasing P availability; however, short-hydroperiod P-enriched sites had the highest macroinvertebrate density. This pattern suggests a synergistic interaction of top-down and bottom-up effects. In contrast, macroinvertebrate density was lower in benthic floc, where it was negatively correlated with hydroperiod. I used two types of mesocosms (field cages and tanks) to manipulate large consumers (fish and grass shrimp) with inclusion/exclusion cages over an experimental P gradient. In most cases, periphyton mats served as an effective predation refuge. Macroinvertebrates were consumed more frequently in P-enriched treatments, where mats were also heavily grazed. Macroinvertebrate densities decreased with increasing P in benthic floc, but increased with enrichment in periphyton mats until levels were reached that caused disassociation of the mat. This research documents several indirect trophic interactions that can occur in complex habitats, and emphasizes the need to characterize the dynamics of all microhabitats to fully describe the dynamics of an ecosystem.
Abstract:
Stable isotope analysis has emerged as one of the primary means for examining the structure and dynamics of food webs, and numerous analytical approaches are now commonly used in the field. Techniques range from simple, qualitative inferences based on the isotopic niche, to Bayesian mixing models that can be used to characterize food-web structure at multiple hierarchical levels. We provide a comprehensive review of these techniques, and thus a single reference source to help identify the most useful approaches to apply to a given data set. We structure the review around four general questions: (1) what is the trophic position of an organism in a food web?; (2) which resource pools support consumers?; (3) what additional information does relative position of consumers in isotopic space reveal about food-web structure?; and (4) what is the degree of trophic variability at the intrapopulation level? For each general question, we detail different approaches that have been applied, discussing the strengths and weaknesses of each. We conclude with a set of suggestions that transcend individual analytical approaches, and provide guidance for future applications in the field.
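For question (1) and the simplest case of question (2), commonly used formulations (general to the field, not specific to this review) are:

```latex
% Question (1): trophic position from nitrogen isotopes, given a baseline organism
% of known trophic position \lambda and a per-step trophic enrichment factor
% \Delta_N (often taken as about 3.4 permil):
\[
  TP_{\mathrm{consumer}} = \lambda +
    \frac{\delta^{15}N_{\mathrm{consumer}} - \delta^{15}N_{\mathrm{baseline}}}{\Delta_N}
\]

% Question (2), simplest two-source case with a single tracer such as carbon,
% where subscripts 1 and 2 index the two resource pools:
\[
  f_1 = \frac{\delta^{13}C_{\mathrm{consumer}} - \delta^{13}C_{2}}
             {\delta^{13}C_{1} - \delta^{13}C_{2}},
  \qquad f_2 = 1 - f_1
\]
```

Bayesian mixing models generalise the second expression to many sources and tracers while propagating uncertainty in the source signatures and enrichment factors.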
Abstract:
We evaluated metacommunity hypotheses of landscape arrangement (indicative of dispersal limitation) and environmental gradients (hydroperiod and nutrients) in structuring macroinvertebrate and fish communities in the southern Everglades. We used samples collected at sites along the eastern boundary of the southern Everglades and in Shark River Slough to evaluate the role of these factors in metacommunity structure. We used eigenfunction spatial analysis to model community structure among sites and distance-based redundancy analysis to partition the variability in communities between spatial and environmental filters. For most animal communities, hydrological parameters had a greater influence on structure than nutrient enrichment; however, both had large effects. The influence of spatial effects indicative of dispersal limitation was weak, and only periphyton infauna appeared to be limited by regional dispersal. At the landscape scale, communities were well mixed but strongly influenced by hydrology. Local-scale species dominance was influenced by water permanence and nutrient enrichment. Nutrient enrichment is limited to water inflow points associated with canals, which may explain its impact in this data set. Hydroperiod and nutrient enrichment are controlled by water managers; our analysis indicates that the decisions they make have strong effects on the communities at the base of the Everglades food web.
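As a hedged sketch of the variation-partitioning step (the standard procedure for distance-based redundancy analysis, not details reported in this abstract), the pure and shared fractions for the environmental (E) and spatial (S) predictor sets follow from adjusted R-squared values of the full and partial models:

```latex
% Standard variation partitioning with two predictor sets, E (environment) and S (space):
% [a] = pure environmental fraction, [c] = pure spatial, [b] = shared, [d] = unexplained.
\begin{align*}
  [a] &= R^2_{\mathrm{adj}}(Y \mid E + S) - R^2_{\mathrm{adj}}(Y \mid S) \\
  [c] &= R^2_{\mathrm{adj}}(Y \mid E + S) - R^2_{\mathrm{adj}}(Y \mid E) \\
  [b] &= R^2_{\mathrm{adj}}(Y \mid E) + R^2_{\mathrm{adj}}(Y \mid S) - R^2_{\mathrm{adj}}(Y \mid E + S) \\
  [d] &= 1 - R^2_{\mathrm{adj}}(Y \mid E + S)
\end{align*}
```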
Abstract:
A dynamic job shop with predetermined resource allocation for all the jobs entering the system is a unique manufacturing environment that exists in industry, and the effective control of production activities within it poses particular challenges. In this thesis a framework for an Internet-based real-time shop floor control system for such a dynamic job shop environment is introduced. The system aims to maintain the schedule feasibility of all the jobs entering the manufacturing system under any circumstances. The system is capable of deciding how often the manufacturing activities should be monitored to check for control decisions that need to be taken on the shop floor. It provides the decision maker with real-time notifications that enable him to generate feasible alternative solutions when a disturbance occurs on the shop floor. The control system also gives the customer real-time access to the status of the jobs on the shop floor. Communication between the controller, the user and the customer is through a web-based, user-friendly GUI. The proposed control system architecture and the interface for the communication system have been designed, developed and implemented.
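A minimal sketch of the kind of adaptive monitoring and notification loop described above might look like the following; the function names, the feasibility rule and the intervals are assumptions for illustration, not the thesis design.

```python
# Illustrative sketch: names, feasibility rule and intervals are assumptions,
# not the control system designed in the thesis.
import time
from typing import Callable, Dict, List

def monitor_shop_floor(read_job_status: Callable[[], List[Dict]],
                       notify: Callable[[str], None],
                       base_interval_s: float = 60.0) -> None:
    """Check schedule feasibility at an adaptive interval and notify on disturbances."""
    interval = base_interval_s
    while True:
        jobs = read_job_status()            # e.g. gathered from machine controllers
        late = [j for j in jobs
                if j["remaining_work"] > j["time_to_due_date"]]
        if late:
            notify(f"{len(late)} job(s) can no longer meet their schedule")
            interval = base_interval_s / 4  # monitor more often while disturbed
        else:
            interval = base_interval_s
        time.sleep(interval)
```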
Abstract:
This research investigates the claim that Change Data Capture (CDC) technologies capture data changes in real time. Based on theory, our hypothesis states that real-time CDC is not achievable with traditional approaches (log scanning, triggers and timestamps). Traditional approaches to CDC require a resource to be polled, which prevents true real-time CDC. We propose an approach to CDC that encapsulates the data source with a set of web services. These web services propagate the changes to the targets and eliminate the need for polling. Additionally, we propose a framework for CDC technologies that allows changes to flow from source to target. This paper discusses current CDC technologies and presents the theory about why they are unable to deliver changes in real time. We then discuss our web service approach to CDC and the accompanying framework, explaining how they can produce real-time CDC. The paper concludes with a discussion of the research required to investigate the real-time capabilities of CDC technologies. © 2010 IEEE.
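A minimal sketch of the push idea, where the encapsulating service notifies registered targets of each change instead of being polled, might look like the following; the class and handler names are illustrative assumptions, not the authors' framework.

```python
# Illustrative sketch: names are assumptions, not the authors' CDC framework.
# The data source is wrapped so that every write pushes the change to subscribed
# targets immediately, removing the need for the targets to poll.
from typing import Callable, Dict, List

ChangeHandler = Callable[[Dict], None]

class EncapsulatedSource:
    def __init__(self) -> None:
        self._rows: Dict[int, Dict] = {}
        self._subscribers: List[ChangeHandler] = []

    def subscribe(self, handler: ChangeHandler) -> None:
        """A target (e.g. a web-service endpoint wrapper) registers for changes."""
        self._subscribers.append(handler)

    def write(self, key: int, row: Dict) -> None:
        """Every write is captured and propagated at the moment it happens."""
        old = self._rows.get(key)
        self._rows[key] = row
        change = {"key": key, "before": old, "after": row}
        for handler in self._subscribers:
            handler(change)     # push, rather than wait to be polled

if __name__ == "__main__":
    source = EncapsulatedSource()
    source.subscribe(lambda change: print("propagated:", change))
    source.write(1, {"name": "widget", "qty": 10})
```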
Abstract:
Provenance is a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing in the world. Some 10 years after beginning research on the topic of provenance, I co-chaired the provenance working group at the World Wide Web Consortium. The working group published the PROV standard for provenance in 2013. In this talk, I will present some use cases for provenance, the PROV standard and some flagship examples of adoption. I will then move on to our current research, which aims to exploit provenance in the context of the Sociam, SmartSociety and ORCHID projects. In doing so, I will present techniques to deal with large-scale provenance, to build predictive models based on provenance, and to analyse provenance.
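As a small, assumed example of the kind of record PROV standardises (not one from the talk), a few provenance assertions in the W3C PROV-N notation can be generated with a short Python script:

```python
# Illustrative sketch: the resources are made up; the statements follow the
# PROV-N notation defined by the W3C PROV standard.
statements = [
    "entity(ex:chart1)",
    "activity(ex:analysis1, 2013-04-01T10:00:00, 2013-04-01T10:30:00)",
    "agent(ex:alice, [prov:type='prov:Person'])",
    "wasGeneratedBy(ex:chart1, ex:analysis1, -)",
    "wasAssociatedWith(ex:analysis1, ex:alice, -)",
]

print("document")
print("  prefix ex <http://example.org/>")
for s in statements:
    print("  " + s)
print("endDocument")
```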