934 results for Recycle and reuse
Abstract:
Design rights represent an interesting example of how the EU legislature has successfully regulated an otherwise heterogeneous field of law. Yet this type of protection is not for everyone. The tools created by EU intervention have been drafted paying much more attention to the industry sector than to designers themselves. In particular, modern, digitally based, individual or small-sized, 3D printing, open designers and their needs are largely neglected by such legislation. There is obviously nothing wrong with drafting legal tools around the needs of an industrial sector with an important role in the EU economy; on the contrary, this is a legitimate and sound industrial policy decision. However, good legislation should be fair, balanced, and (technologically) neutral in order to offer suitable solutions to all the players in the market, and all the citizens in society, without discriminating against the smallest or the newest: the cost would be to stifle innovation. The use of printing machinery to manufacture physical objects created digitally with computer programs such as Computer-Aided Design (CAD) software has been in place for quite a few years, and it is actually the standard in many industrial fields, from aeronautics to home furniture. What has changed in recent years, and has the potential to be a paradigm-shifting factor, is the combination of the popularization of such technologies (price, size, usability, quality) and the diffusion of a culture based on access to and reuse of knowledge. We will call this blend Open Design. It is probably still too early, however, to say whether 3D printing will be used in the future to refer to a major event in human history, or will instead be relegated to a lonely Wikipedia entry, similar to "Betamax" (copyright scholars are familiar with it for other reasons). It is not too early, however, to develop a legal analysis that will hopefully contribute to clarifying the major issues in the current structure of EU design law, why many modern open designers will probably find better protection in copyright, and whether they can successfully rely on open licenses to achieve their goals. With regard to the latter point, we will use Creative Commons (CC) licenses to test our hypothesis, due to their unique characteristic of being modular, i.e. having different license elements (clauses) that licensors can choose in order to adapt the license to their own needs.
Abstract:
Ontologies and Methods for Interoperability of Engineering Analysis Models (EAMs) in an e-Design Environment. September 2007. Neelima Kanuri, B.S., Birla Institute of Technology and Sciences, Pilani, India; M.S., University of Massachusetts Amherst. Directed by Professor Ian Grosse. Interoperability is the ability of two or more systems to exchange and reuse information efficiently. This thesis presents new techniques for interoperating engineering tools using ontologies as the basis for representing, visualizing, reasoning about, and securely exchanging abstract engineering knowledge between software systems. The specific engineering domain that is the primary focus of this report is the modeling knowledge associated with the development of engineering analysis models (EAMs). This abstract modeling knowledge has been used to support the integration of analysis and optimization tools in iSIGHT-FD, a commercial engineering environment. ANSYS, a commercial FEA tool, has been wrapped as an analysis service available inside iSIGHT-FD. An engineering analysis modeling (EAM) ontology has been developed and instantiated to form a knowledge base for representing analysis modeling knowledge. The instances of the knowledge base are the analysis models of real-world applications. To illustrate how abstract modeling knowledge can be exploited for useful purposes, a cantilever I-beam design optimization problem has been used as a test-bed proof-of-concept application. Two distinct finite element models of the I-beam are available to analyze a given beam design: a beam-element finite element model with potentially lower accuracy but significantly reduced computational cost, and a high-fidelity, high-cost shell-element finite element model. The goal is to obtain an optimized I-beam design at minimum computational expense. An intelligent knowledge-based tool was developed and implemented in FiPER. This tool reasons about the modeling knowledge to intelligently shift between the beam and the shell element models during an optimization process, selecting the best analysis model for a given optimization design state. In addition to improved interoperability and design optimization, methods are developed and presented that demonstrate the ability to operate on ontological knowledge bases to perform important engineering tasks. One such method is an automatic technical report generation method, which converts the modeling knowledge associated with an analysis model into a flat technical report. The second is a secure knowledge sharing method, which allocates permissions to portions of knowledge to control knowledge access and sharing. Together, these methods enable recipient-specific, fine-grained, controlled knowledge viewing and sharing in an engineering workflow integration environment such as iSIGHT-FD, and play an important role in reducing the large-scale inefficiencies in current product design and development cycles caused by poor knowledge sharing and reuse between people and software engineering tools. This work is a significant advance in both the understanding and the application of knowledge integration in a distributed engineering design framework.
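A minimal sketch of the kind of fidelity-switching logic such a knowledge-based tool applies during optimization; the rule, thresholds, and function names below are illustrative assumptions, not the actual iSIGHT-FD/FiPER implementation.

```python
# Illustrative sketch: pick the cheap beam-element model when a
# knowledge-base rule says it is trustworthy for the current design state,
# otherwise fall back to the expensive shell-element model.

from dataclasses import dataclass

@dataclass
class IBeamDesign:
    flange_width: float   # m
    web_height: float     # m
    thickness: float      # m

def beam_model_trusted(design: IBeamDesign, load: float) -> bool:
    """Hypothetical rule: the beam-element model is acceptable while the
    section stays slender and the load is below an assumed threshold."""
    slenderness = design.web_height / design.thickness
    return slenderness > 20.0 and load < 5.0e4   # assumed limits

def analyze(design: IBeamDesign, load: float) -> dict:
    if beam_model_trusted(design, load):
        return {"model": "beam-element", "cost": "low", "result": run_beam_fe(design, load)}
    return {"model": "shell-element", "cost": "high", "result": run_shell_fe(design, load)}

def run_beam_fe(design, load):
    # Placeholder for a wrapped low-fidelity analysis service (e.g. an
    # ANSYS beam model exposed inside the workflow environment).
    return {"max_stress": None}

def run_shell_fe(design, load):
    # Placeholder for the wrapped high-fidelity shell-element service.
    return {"max_stress": None}
```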
Abstract:
Clinical text understanding (CTU) is of interest to health informatics because critical clinical information, frequently represented as unconstrained text in electronic health records, is extensively used by human experts to guide clinical practice and decision making and to document the delivery of care, but is largely unusable by information systems for queries and computations. Recent initiatives advocating for translational research call for technologies that can integrate structured clinical data with unstructured data, provide a unified interface to all data, and contextualize clinical information for reuse in the multidisciplinary and collaborative environment envisioned by the CTSA program. This implies that technologies for the processing and interpretation of clinical text should be evaluated not only in terms of their validity and reliability in their intended environment, but also in light of their interoperability and their ability to support information integration and contextualization in a distributed and dynamic environment. This vision adds a new layer of information representation requirements that needs to be accounted for when conceptualizing the implementation or acquisition of clinical text processing tools and technologies for multidisciplinary research. On the other hand, electronic health records frequently contain unconstrained clinical text with high variability in the use of terms and documentation practices, and without commitment to the grammatical or syntactic structure of the language (e.g., triage notes, physician and nurse notes, chief complaints). This hinders the performance of natural language processing technologies, which typically rely heavily on the syntax and grammatical structure of the text. This document introduces our method to transform unconstrained clinical text found in electronic health information systems into a formal (computationally understandable) representation that is suitable for querying, integration, contextualization and reuse, and is resilient to the grammatical and syntactic irregularities of clinical text. We present our design rationale, method, and results of an evaluation in processing chief complaints and triage notes from 8 different emergency departments in Houston, Texas. Finally, we discuss the significance of our contribution in enabling the use of clinical text in a practical bio-surveillance setting.
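As a toy illustration of the general idea (not the authors' actual method), the sketch below maps a free-text chief complaint to a structured, queryable representation through normalization and lexicon lookup; the lexicon entries and concept codes are placeholders.

```python
# Illustrative only: dictionary-based normalization of free-text chief
# complaints into a formal representation tolerant of irregular syntax.

import re

LEXICON = {  # hypothetical surface forms -> placeholder concept codes
    "sob": "CONCEPT_SHORTNESS_OF_BREATH",
    "shortness of breath": "CONCEPT_SHORTNESS_OF_BREATH",
    "cp": "CONCEPT_CHEST_PAIN",
    "chest pain": "CONCEPT_CHEST_PAIN",
    "n/v": "CONCEPT_NAUSEA_VOMITING",
}

def normalize(text: str) -> str:
    # Lower-case and strip stray punctuation; triage notes rarely follow
    # grammatical structure, so no parsing is attempted.
    return re.sub(r"[^a-z0-9/ ]+", " ", text.lower()).strip()

def to_formal_representation(chief_complaint: str) -> dict:
    normalized = normalize(chief_complaint)
    concepts = [code for phrase, code in LEXICON.items() if phrase in normalized]
    return {"raw_text": chief_complaint, "concepts": sorted(set(concepts))}

print(to_formal_representation("Pt c/o CP and SOB since AM"))
# -> concepts: CONCEPT_CHEST_PAIN, CONCEPT_SHORTNESS_OF_BREATH
```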
Abstract:
This paper describes an infrastructure for the automated evaluation of semantic technologies and, in particular, semantic search technologies. For this purpose, we present an evaluation framework which follows a service-oriented approach for evaluating semantic technologies and uses the Business Process Execution Language (BPEL) to define evaluation workflows that can be executed by process engines. This framework supports a variety of evaluations from different semantic areas, including search, and is extensible to new evaluations. We show how BPEL addresses this diversity as well as how it is used to solve specific challenges such as heterogeneity, error handling, and reuse.
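Since BPEL itself is XML-based, the sketch below instead uses Python to convey the shape of such an evaluation workflow: a sequence of service invocations with per-step fault handling and a reusable measurement step. The service names and steps are assumptions, not the framework's actual processes.

```python
# Python analogue (not BPEL) of an evaluation workflow executed by a
# process engine: invoke services in sequence, capture faults, reuse steps.

from typing import Callable

def step(name: str, call: Callable):
    """Invoke one evaluation service; record a fault instead of aborting,
    mirroring a fault handler attached to a BPEL activity."""
    try:
        return {"step": name, "ok": True, "output": call()}
    except Exception as exc:
        return {"step": name, "ok": False, "error": str(exc)}

def run_semantic_search_evaluation(endpoint: str, queries: list) -> list:
    trace = []
    trace.append(step("load_test_queries", lambda: queries))
    trace.append(step("execute_queries",
                      lambda: [f"{endpoint}?q={q}" for q in queries]))
    # A measurement step like this could be reused across evaluations
    # from different semantic areas.
    trace.append(step("compute_metrics", lambda: {"precision": None, "recall": None}))
    trace.append(step("store_results", lambda: "stored"))
    return trace

print(run_semantic_search_evaluation("http://example.org/search", ["ontology", "reuse"]))
```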
Abstract:
Nearly 3000 slaughterhouses (74% of them public facilities) were built in Spain during the last decades of the nineteenth century and the first half of the twentieth century. The need to comply with new technical requirements and regulations on meat hygiene passed in the 1970s, together with the gradual replacement of public facilities by larger and more modern private slaughterhouses, subsequently led to the closure and abandonment of many of these buildings. Public slaughterhouses generally consisted of several single-storey, open-plan buildings located around a courtyard. Although they were originally located on the outskirts of towns, many slaughterhouses now lie within built-up areas as a result of urban development. The present work aims to contribute to a better understanding of these agro-industrial buildings and to provide ideas for their conservation and reuse. A review of the historical evolution and architectural features of public slaughterhouses in Spain is presented, and different examples of old vacant slaughterhouses reused to accommodate libraries, offices, community centres, exhibition halls or sports centres, among others, are shown in the paper.
Abstract:
New digital artifacts are emerging in data-intensive science. For example, scientific workflows are executable descriptions of scientific procedures that define the sequence of computational steps in an automated data analysis, supporting reproducible research and the sharing and replication of best practice and know-how through reuse. Workflows are specified at design time and interpreted through their execution in a variety of situations, environments, and domains. Hence it is essential to preserve both their static and dynamic aspects, along with the research context in which they are used. To achieve this, we propose the use of multidimensional digital objects (Research Objects) that aggregate the resources used and/or produced in scientific investigations, including workflow models, provenance of their executions, and links to the relevant associated resources, along with the provision of technological support for their preservation and efficient retrieval and reuse. In this direction, we specified a software architecture for the design and implementation of a Research Object preservation system, and realized this architecture with a set of services and clients, drawing together practices in digital libraries, preservation systems, workflow management, social networking and Semantic Web technologies. In this paper, we describe the backbone system of this realization, a digital library system built on top of dLibra.
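A minimal sketch of what such an aggregation might look like in code; the class and field names are illustrative and do not reflect the formal Research Object model or the dLibra-based system's API.

```python
# Illustrative aggregation of the resources a Research Object bundles:
# workflow definitions, provenance of executions, and linked resources.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    uri: str
    role: str   # e.g. "workflow", "provenance-trace", "dataset", "publication"

@dataclass
class ResearchObject:
    identifier: str
    creator: str
    resources: List[Resource] = field(default_factory=list)
    annotations: List[dict] = field(default_factory=list)  # research context

    def aggregate(self, uri: str, role: str) -> None:
        """Record a resource used or produced in the investigation."""
        self.resources.append(Resource(uri, role))

ro = ResearchObject("ro-0001", "example-lab")
ro.aggregate("http://example.org/workflows/alignment.t2flow", "workflow")
ro.aggregate("http://example.org/runs/2013-05-01/provenance.ttl", "provenance-trace")
ro.annotations.append({"about": "ro-0001", "hypothesis": "example study context"})
```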
Abstract:
Workflow reuse is a major benefit of workflow systems and shared workflow repositories, but there are barely any studies that quantify the degree of reuse of workflows or the practical barriers that may stand in the way of successful reuse. In our own work, we hypothesize that defining workflow fragments improves reuse, since end-to-end workflows may be very specific and only partially reusable by others. This paper reports on a study of the current use of workflows and workflow fragments in labs that use the LONI Pipeline, a popular workflow system used mainly for neuroimaging research that enables users to define and reuse workflow fragments. We present an overview of the benefits of workflows and workflow fragments reported by users in informal discussions. We also report on a survey of researchers in a lab that has the LONI Pipeline installed, asking them about their experiences with reuse of workflow fragments and the actual benefits they perceive. This leads to quantifiable indicators of the reuse of workflows and workflow fragments in practice. Finally, we discuss barriers to further adoption of workflow fragments and workflow reuse that motivate further work.
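One way to make such an indicator concrete, purely as an illustration (neither the metric nor the data layout is taken from the study): the fraction of workflows that include at least one fragment also used in another workflow.

```python
# Illustrative reuse indicator over a toy repository of workflows,
# where each workflow is described by the set of fragment ids it uses.

from typing import Dict, Set

def fragment_reuse_rate(workflows: Dict[str, Set[str]]) -> float:
    counts: Dict[str, int] = {}
    for frags in workflows.values():
        for f in frags:
            counts[f] = counts.get(f, 0) + 1
    shared = {f for f, n in counts.items() if n > 1}       # fragments used in >1 workflow
    reusing = sum(1 for frags in workflows.values() if frags & shared)
    return reusing / len(workflows) if workflows else 0.0

print(fragment_reuse_rate({
    "wf1": {"skull_strip", "register"},
    "wf2": {"register", "segment"},
    "wf3": {"custom_step"},
}))  # 2 of 3 workflows share the "register" fragment -> ~0.67
```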
Abstract:
The reclamation, treatment and reuse of municipal wastewater can provide important environmental benefits. In this paper, 25 studies on this topic were reviewed and it was found that there are many (>150) different drivers acting for and against wastewater recycling. To deal with the challenge of comparing studies with different research designs, a framework was developed that allowed the literature to be organized into comparable study contexts. Studies were categorized according to the level of analysis (wastewater recycling scheme, city, water utility, state, country, global) and the outcome investigated (development/investment in new schemes, program implementation, percentage of wastewater recycled, percentage of water demand covered by recycled water, multiple outcomes). Findings across comparable case studies were then grouped according to the type (for or against recycling) and category of driver (social, natural, technical, economic, policy or business). The utility of the framework is demonstrated by summarizing the findings from four Australian studies at the city level. The framework offers a unique approach for disentangling the broad range of potential drivers for and against water recycling and focusing on those that seem relevant in specific study contexts. It may offer a valuable starting point for building hypotheses in future work.
Abstract:
In order to address problems of information overload in digital imagery task domains, we have developed an interactive approach to the capture and reuse of image context information. Our framework models different aspects of the relationship between images and the domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. The approach allows us to gauge a user's intentions as they complete goal-directed image tasks. As users analyze retrieved imagery, their interactions are captured and an expert task context is dynamically constructed. This human expertise, proficiency, and knowledge can then be leveraged to support other users in carrying out similar domain tasks. We have applied our techniques to two multimedia retrieval applications for two different image domains, namely the geo-spatial and medical imagery domains.
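A small sketch of how interaction capture might accumulate a task context; the event types, weights, and task name are assumptions for illustration rather than the framework's actual model.

```python
# Illustrative capture of image interactions into a per-task context that
# could later be reused to support other users on similar tasks.

from collections import defaultdict

EVENT_WEIGHTS = {"view": 1.0, "zoom": 2.0, "annotate": 4.0, "export": 5.0}  # assumed

class TaskContext:
    def __init__(self, task_id: str):
        self.task_id = task_id
        self.image_scores = defaultdict(float)

    def record(self, image_id: str, event: str) -> None:
        self.image_scores[image_id] += EVENT_WEIGHTS.get(event, 0.0)

    def top_images(self, k: int = 5):
        """Images most strongly associated with this task so far."""
        return sorted(self.image_scores.items(), key=lambda kv: -kv[1])[:k]

ctx = TaskContext("flood-damage-assessment")   # hypothetical geo-spatial task
ctx.record("img_042", "view")
ctx.record("img_042", "annotate")
ctx.record("img_107", "view")
print(ctx.top_images())   # img_042 ranks first (5.0) ahead of img_107 (1.0)
```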
Abstract:
* The work is partially supported by the Russian Foundation for Basic Studies (grant 02-01-00466).
Abstract:
The aim of the paper is to extend production planning with reverse logistics and reuse. Material requirements planning (MRP) systems plan and control the inventory levels and purchasing activities of the firm. In the last decade, researchers in this field have tried to incorporate reverse logistics activities into MRP systems. The size of the MRP tables and of the inventories grows in this case, because newly purchased products and reusable old items must be recorded separately. Determining order quantities also becomes harder and leads to more complex lot sizes with these two modes of material supply. An EOQ-type reverse logistics inventory model is presented in the paper, together with a dynamic lot-sizing generalization. The generalized model can serve as the basis for an order-quantity heuristic that can be built into a production planning and control system such as SAP.
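For reference, the classical economic order quantity (EOQ) that such models start from is the standard textbook result below; the paper's specific reverse-logistics formulation and its dynamic lot-sizing extension are not reproduced here.

```latex
% Classical EOQ baseline: total relevant cost per period for lot size Q,
% demand rate D, fixed ordering cost K and unit holding cost h, minimized at Q*.
\[
  C(Q) = \frac{K D}{Q} + \frac{h Q}{2},
  \qquad
  Q^{*} = \sqrt{\frac{2 K D}{h}} .
\]
% EOQ-type reverse-logistics models typically add a return fraction and a
% separate lot size for reusable returned items, each with its own setup and
% holding cost; the dynamic extension lets these quantities vary over time.
```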
Abstract:
Part 4: Transition Towards Product-Service Systems
Abstract:
Despite the recent explosion in the synthesis and identification of a diverse set of new nanophotocatalysts, titanium dioxide (TiO2) remains among the most promising photocatalysts because it is inexpensive, non-corrosive, environmentally friendly, and stable under a wide range of conditions. TiO2 has shown excellent promise for solar cell applications and for the remediation of chemical pollutants and toxins. Over the past few decades, there has been tremendous development of nanophotocatalysts for a variety of industrial applications (e.g., water purification and reuse, disinfection of water matrices, air purification, deodorization, and sterilization of soils). This paper details traditional and new industrial routes for the preparation of nanophotocatalysts and the characterization techniques used to understand their physicochemical properties, such as surface area, ζ potential, crystal size, crystallographic phase, morphology, and optical transparency. Finally, we present some applications of industrial nanophotocatalysts.
Abstract:
The International Space Station (ISS) requires a substantial amount of potable water for use by the crew. The economic and logistic limitations of transporting the vast amount of water required onboard the ISS necessitate onboard recovery and reuse of the aqueous waste streams. Various treatment technologies are employed within the ISS water processor to render the waste water potable, including filtration, ion exchange, adsorption, and catalytic wet oxidation. The ion exchange resins and adsorption media are combined in multifiltration beds for removal of ionic and organic compounds. A mathematical model (MFBMODEL™) designed to predict the performance of a multifiltration (MF) bed was developed. MFBMODEL consists of ion exchange models for describing the behavior of the different resin types in an MF bed (e.g., mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins) and an adsorption model capable of predicting the performance of the adsorbents in an MF bed. Multicomponent ion exchange equilibrium models that incorporate the water formation reaction, electroneutrality condition, and degree of ionization of weak acids and bases for mixed bed, strong acid cation, strong base anion, and weak base anion exchange resins were developed and verified. The equilibrium models use a tanks-in-series approach that allows for consideration of variable influent concentrations. The adsorption modeling approach was developed in related studies, and its application within the MFBMODEL framework is demonstrated in the Appendix to this study. MFBMODEL consists of a graphical user interface programmed in Visual Basic and Fortran computational routines. This dissertation shows MF bed modeling results in which the model is verified for a surrogate of the ISS waste shower and handwash stream. In addition, a multicomponent ion exchange model that incorporates mass transfer effects was developed; it is capable of describing the performance of strong acid cation (SAC) and strong base anion (SBA) exchange resins, but does not include reaction effects. This dissertation presents results showing the mass transfer model's capability to predict the performance of binary and multicomponent column data for SAC and SBA exchange resins. The ion exchange equilibrium and mass transfer models developed in this study are also applicable to terrestrial water treatment systems. They could be applied for removal of cations and anions from groundwater (e.g., hardness, nitrate, perchlorate) and from industrial process waters (e.g., boiler water, ultrapure water in the semiconductor industry).
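As background on the tanks-in-series idea, a generic equilibrium column balance of the kind such models employ is sketched below; this is an illustrative form, not the exact MFBMODEL equations.

```latex
% Bed divided into N well-mixed tanks. For species i in tank j, with liquid
% hold-up V, resin mass m, volumetric flow rate Q, liquid-phase concentration
% C and resin-phase loading q given by the ion exchange equilibrium:
\[
  V \frac{dC_{i,j}}{dt} + m \frac{dq_{i,j}}{dt}
    = Q \left( C_{i,j-1} - C_{i,j} \right),
  \qquad
  q_{i,j} = f_i\left(C_{1,j}, \dots, C_{n,j}\right),
\]
% together with electroneutrality in each tank,
\[
  \sum_{i} z_i \, C_{i,j} = 0 ,
\]
% where z_i is the ionic charge. Feeding tank j = 1 with a time-varying
% influent concentration accommodates variable influent profiles.
```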
Abstract:
Our study focused on Morocco, investigating the dissemination of PBs among farmers belonging to the first pillar of the GMP, located in the Fès-Meknès region, and assessing how innovation adoption is influenced by the network of relationships in which farmers are involved. We adopted an "ego network" approach to identify the primary stakeholders responsible for the diffusion of PBs. We collected data through face-to-face interviews with 80 farmers in April and May 2021. The data were processed with the aim of: 1) analysing the total number of main and specific topics discussed between egos and their alters in relation to the variation of some ego attributes; 2) analysing the egos' network characteristics using E-Net software; and 3) identifying the significant variables that influence farmers' access to knowledge about, use of, and reuse of PBs, for which a binary logistic regression (LR) was applied. The first result showed that the main PB topics discussed were technical positioning, the need to use PBs, knowledge of PBs, and organic PBs. We noted that these farmers share specific features: they hold a high school diploma or a bachelor's degree, they are specialised in fruit and cereal farming, and they are managers and members of a professional organisation. The second result, from the social network analysis (SNA), showed that: 1) PBs seem to have become a common topic for farmers who had already exchanged fertiliser information with their alters; 2) there is moderate heterogeneity in the networks, with farmers accessing information mainly from acquaintances and professionals; and 3) the networks have relatively low density and alters are not tightly connected to each other. Farmers hold a brokerage position in the networks, controlling the flow of information about PBs. The LR revealed that both the farmers' attributes and the networks' characteristics influence growers' knowledge, use and reuse of PBs.
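A minimal sketch of how such a regression could be set up; the explanatory variables and the synthetic rows below are illustrative assumptions, not the study's data.

```python
# Illustrative binary logistic regression relating PB use/reuse to farmer
# attributes and ego-network characteristics (synthetic data).

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: years of education, member of a professional organisation (0/1),
# ego-network density, share of professional contacts among alters.
X = np.array([
    [12, 1, 0.20, 0.6],
    [16, 1, 0.35, 0.7],
    [ 8, 0, 0.10, 0.2],
    [12, 0, 0.15, 0.3],
    [16, 1, 0.40, 0.8],
    [ 8, 0, 0.05, 0.1],
])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = knows/uses/reuses PBs (synthetic labels)

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)              # direction and strength of influences
print(model.predict_proba([[12, 1, 0.25, 0.5]]))  # adoption probability for a new farmer
```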