955 results for Reuse
Abstract:
This book is a compilation of 5th-year architecture dissertations dealing with the value of the existing built fabric of Belfast city center and the potential reuse of that fabric. It hopefully begins a tradition of disseminating the extensive work students undertake in describing, understanding and analyzing the built heritage of Belfast and Northern Ireland. It does not aim to be innovative in its findings nor revolutionary in its position, but it will hopefully serve as a seed in the effort to get architecture students engaged with their city and with the value of its existing built fabric.
Abstract:
Inter-component communication has always been of great importance in the design of software architectures, and connectors have been treated as first-class entities in many approaches [1][2][3]. We present a novel architectural style derived from the well-established domain of computer networks. The style adopts the inter-component communication protocol in a novel way that allows large-scale software reuse. It mainly targets real-time, distributed, concurrent, and heterogeneous systems.
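Treating a connector as a first-class entity means the communication path itself is an explicit design element rather than a side effect of component code. The sketch below is an invented, minimal illustration of that idea (it is not the style proposed in the paper): a `Connector` object mediates all messages, so components never reference each other directly and remain individually reusable.

```python
# Illustrative sketch only: a connector as a first-class entity that
# routes typed messages between components, decoupling them from each
# other. All class and message names here are invented for the example.

class Connector:
    """Routes typed messages between registered component handlers."""
    def __init__(self):
        self._subscribers = {}  # message type -> list of handler callables

    def attach(self, msg_type, handler):
        self._subscribers.setdefault(msg_type, []).append(handler)

    def send(self, msg_type, payload):
        # Deliver the payload to every handler attached for this type.
        return [h(payload) for h in self._subscribers.get(msg_type, [])]

class Sensor:
    """A component that knows only the connector, not its peers."""
    def __init__(self, connector):
        self.connector = connector
    def publish(self, reading):
        return self.connector.send("reading", reading)

class Logger:
    def handle(self, payload):
        return f"logged:{payload}"

bus = Connector()
bus.attach("reading", Logger().handle)
sensor = Sensor(bus)
print(sensor.publish(42))  # -> ['logged:42']
```

Because the routing policy lives entirely in the connector, swapping it for, say, a network-backed implementation would not require touching the components.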
Abstract:
Software Product-Line Engineering has emerged in recent years as an important strategy for maximising reuse within the context of a family of related products. In current approaches to software product-lines, there is general agreement that the definition of a reference architecture for the product-line is an important step in the software engineering process. In this paper we introduce ADLARS, a new form of Architecture Description Language that places emphasis on the capture of architectural relationships. ADLARS is designed for use within a product-line engineering process. The language supports both the definition of architectural structure and of important architectural relationships. In particular, it supports capture of the relationships between product features, component and task architectures, interfaces and parameter requirements.
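The central relationship such a language captures is the mapping from selected product features to the components a concrete product architecture must include. Since the abstract does not show ADLARS syntax, the following is a hypothetical data-model sketch of that feature-to-component relationship, with all names invented:

```python
# Hypothetical sketch (not ADLARS syntax): modelling the relationship
# between product features and the components they pull into a
# product's architecture.
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Component:
    name: str
    interfaces: tuple = ()  # interfaces the component provides

@dataclass
class Feature:
    name: str
    # Components required when this feature is selected for a product.
    requires: List[Component] = field(default_factory=list)

def derive_architecture(selected_features):
    """Collect, without duplicates, the component set implied by a
    feature selection - one step of instantiating a product from a
    product-line reference architecture."""
    seen, order = set(), []
    for feature in selected_features:
        for comp in feature.requires:
            if comp.name not in seen:
                seen.add(comp.name)
                order.append(comp)
    return order

gps = Component("GpsDriver", ("IPosition",))
maps = Component("MapRenderer", ("IRender",))
navigation = Feature("Navigation", [gps, maps])
tracking = Feature("Tracking", [gps])

arch = derive_architecture([navigation, tracking])
print([c.name for c in arch])  # -> ['GpsDriver', 'MapRenderer']
```

A real ADL would additionally capture task architectures, interface contracts and parameter requirements, as the abstract notes; the sketch covers only the feature-component link.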
Abstract:
The member states of the European Union are faced with the challenges of handling “big data” as well as with the growing impact of the supranational level. Given that the success of efforts at European level strongly depends on corresponding national and local activities, i.e., the quality of implementation and the degree of consistency, this chapter centers upon the coherence of European strategies and national implementations concerning the reuse of public sector information. Taking the City of Vienna’s open data activities as an illustrative example, we seek an answer to the question of whether and to what extent developments at European level and other factors have an effect on local efforts towards open data. We find that the European Commission’s ambitions are driven by a strong economic argumentation, while the efforts of the City of Vienna have very little to do with the European orientation and are rather dominated by lifestyle and administrative-reform arguments. Hence, we observe a decoupling of supranational strategies from national implementation activities. The very reluctant attitude at the Austrian federal level might be one reason for this; nationally induced barriers, such as the administrative culture, might be another. In order to enhance the correspondence between the strategies of the supranational level and those of the implementers at national and regional levels, the strengthening of soft-law measures could be promising.
Abstract:
This paper investigates the potential for the reuse of Belfast's existing Victorian terraced housing. The aim is to study methods for retrofitting these unique pieces of architectural heritage, bringing them up to modern-day standards with reduced energy costs and CO2 emissions in line with the Climate Change Act 2008 (‘the Act’). It also highlights the characteristics of sustainable retrofitting examples and original prefabricated elements, which enable the 19th-century properties to be re-adapted to suit modern-day needs. The analysis builds on a report by Mark Hines Architects, in association with SAVE Britain's Heritage, in which the company explains the detrimental effect that the ‘Pathfinder’ scheme has had on English cities. Similarly, in Belfast, redevelopment schemes such as that in the ‘Village’ district have sought to replace undervalued terraced housing stock; this paper instead searches for more sustainable options to retain these homes, along with the embodied energy and traditions attached to them.
Abstract:
The construction industry in Northern Ireland is one of the major contributors of construction waste to landfill each year. The aim of this research paper is to identify the core on-site management causes of material waste on construction sites in Northern Ireland and to illustrate various methods of prevention that can be adopted. The research begins with a detailed literature review, complemented by semi-structured interviews with six professionals who are experienced and active within the Northern Ireland construction industry. Following on from the literature review and interview analysis, a questionnaire survey is developed to obtain further information on the subject area. The questionnaire is based on the key findings of the previous stages so as to direct the research towards the most influential factors. The analysis of the survey responses reveals that the core causes of waste generation include a rushed program, poor handling and on-site damage of materials, while the principal methods of prevention emerge as adequate storage, on-site reuse of materials and efficient material ordering. Furthermore, the role of professional background in shaping perceptions of waste management is also investigated, and significant differences are identified. The findings of this research are beneficial for the industry as they enhance the understanding of the causes of construction waste generation and highlight the practices required to reduce waste on-site in the context of sustainable development.
Abstract:
This paper uses discrete choice models, supported by GIS data, to analyse the National Land Use Database, a register of more than 21,000 English brownfields: previously used sites, with or without contamination, that are currently unused or underused. Using spatial discrete choice models, including the first application of a spatial probit latent class model with class-specific neighbourhood effects, we find evidence of large local differences in the determinants of brownfield redevelopment in England, and that the reuse decisions of adjacent sites affect the reuse of a site. We also find that sites with a history of industrial activities, large sites, and sites located in the poorest and bleakest areas of English cities and regions are more difficult to redevelop. In particular, we find that the probability of reusing a brownfield increases by up to 8.5% for a privately owned site compared to a publicly owned one, and by between 15% and 30% if a site is located in London rather than the North West of England. We suggest that locally tailored policies are more suitable than regional or national policies for boosting the reuse of brownfield sites.
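In a probit discrete choice model of this kind, the probability of reuse is P(reuse) = Φ(x'β), where Φ is the standard normal CDF and x holds site characteristics such as ownership and location. The sketch below illustrates only that basic (non-spatial, non-latent-class) mechanics; the coefficients are invented for illustration and are not estimates from the paper.

```python
# Hedged sketch of a plain probit choice model for brownfield reuse:
# P(reuse) = Phi(beta0 + b_priv*private + b_lon*london).
# All coefficient values below are made up for illustration.
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_reuse(private_owner, in_london, beta0=-0.3, b_priv=0.2, b_lon=0.6):
    z = beta0 + b_priv * private_owner + b_lon * in_london
    return phi(z)

base = p_reuse(0, 0)   # publicly owned, outside London
priv = p_reuse(1, 0)   # privately owned, outside London
# Marginal effect of private ownership on the reuse probability:
print(round(priv - base, 3))
```

The paper's spatial probit additionally lets a site's latent reuse propensity depend on neighbouring sites' decisions, which this toy version omits entirely.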
Abstract:
We consider the problem of segmenting text documents that have a two-part structure, such as a problem part and a solution part. Documents of this genre include incident reports, which typically involve a description of events relating to a problem followed by those pertaining to the solution that was tried. Segmenting such documents into their two component parts would render them usable in knowledge reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that can harness a corpus of similar documents to model the behavior of the two segments and their inter-relatedness using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in segmentation accuracy, and that the improved accuracy translates well to improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise and empirically illustrate that it is quite noise-tolerant and degrades gracefully with increasing amounts of noise.
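The generative story above suggests a simple decoding strategy: score every candidate split point by the likelihood of the prefix under the problem language model and of the suffix under a mixture of the solution language model and a Model-1-style translation table, then keep the best split. The sketch below implements that idea with toy unigram models and invented probabilities; it is a simplification of the paper's technique, not its actual implementation.

```python
# Hedged sketch: pick the split point k maximising
#   P(w_1..k | problem LM) * P(w_k+1..n | lam*solution LM
#                               + (1-lam)*translation from problem words)
# Unigram models and the tiny translation table are invented examples.
import math

def seg_score(words, k, p_lm, s_lm, trans, lam=0.5):
    problem, solution = words[:k], words[k:]
    score = sum(math.log(p_lm.get(w, 1e-6)) for w in problem)
    for w in solution:
        # IBM-Model-1-style term: uniform alignment over problem words.
        t = sum(trans.get((pw, w), 0.0) for pw in problem) / max(len(problem), 1)
        score += math.log(lam * s_lm.get(w, 1e-6) + (1 - lam) * t + 1e-12)
    return score

def segment(words, p_lm, s_lm, trans):
    """Return the index where the solution part begins."""
    return max(range(1, len(words)),
               key=lambda k: seg_score(words, k, p_lm, s_lm, trans))

p_lm = {"printer": 0.4, "jams": 0.4, "paper": 0.2}
s_lm = {"remove": 0.4, "tray": 0.3, "paper": 0.3}
trans = {("jams", "remove"): 0.5, ("printer", "tray"): 0.3}

doc = ["printer", "jams", "remove", "paper", "tray"]
print(segment(doc, p_lm, s_lm, trans))  # -> 2 (solution starts at "remove")
```

Note how the word "paper" is plausible under both models; it is the translation term, tying solution words back to problem words, that resolves such lexically inter-related cases.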
Abstract:
The past decade has witnessed an unprecedented growth in the amount of available digital content, and its volume is expected to continue to grow over the next few years. Unstructured text data generated from web and enterprise sources forms a large fraction of such content. Much of it contains large volumes of reusable data, such as solutions to frequently occurring problems and general know-how that may be reused in appropriate contexts. In this work, we address issues around leveraging unstructured text data from sources as diverse as the web and the enterprise within the Case-Based Reasoning framework. Case-Based Reasoning (CBR) provides a framework and methodology for the systematic reuse of historical knowledge, available in the form of problem-solution pairs, in solving new problems. Here, we consider possibilities for enhancing Textual CBR systems under three main themes: procurement, maintenance and retrieval. We adapt and build upon state-of-the-art techniques from data mining and natural language processing in addressing the various challenges therein. Under procurement, we investigate the problem of extracting cases (i.e., problem-solution pairs) from data sources such as incident/experience reports. We develop case-base maintenance methods specifically tuned to text, targeted towards retaining solutions such that the utility of the filtered case base in solving new problems is maximized. Further, we address the problem of query suggestions for textual case bases and show that exploiting the problem-solution partition can enhance retrieval effectiveness by prioritizing more useful query suggestions. Additionally, we illustrate interpretable clustering as a tool to drill down into domain-specific text collections (since CBR systems are usually very domain-specific) and develop techniques for improved similarity assessment in social media sources such as microblogs. Through extensive empirical evaluations, we illustrate the improvements that we are able to achieve over state-of-the-art methods for the respective tasks.
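The reuse loop underlying all three themes is the classic CBR retrieve-and-reuse step: match a new problem against the stored problem descriptions and propose the paired solution. The sketch below shows that core loop with a bag-of-words cosine similarity; the case texts are invented, and real Textual CBR systems would use the richer similarity measures the thesis develops.

```python
# Hedged sketch of the core Textual CBR reuse loop: retrieve the stored
# problem most similar to a query (bag-of-words cosine) and propose its
# paired solution. The case base below is invented for illustration.
from collections import Counter
import math

def cosine(a, b):
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each case is a (problem, solution) pair, e.g. mined from incident reports.
case_base = [
    ("server disk full after log growth", "rotate and compress old logs"),
    ("database connection refused", "restart the connection pool"),
]

def retrieve(query):
    """Return the solution of the nearest problem in the case base."""
    problem, solution = max(case_base, key=lambda c: cosine(query, c[0]))
    return solution

print(retrieve("disk is full on the log server"))
# -> rotate and compress old logs
```

The problem-solution partition matters here: similarity is computed against the problem texts only, which is exactly the structure the segmentation work above makes available.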
Abstract:
The development of computational systems is a complex, multi-stage process that requires a thorough analysis of the problem, taking the applicable constraints and requirements into account. This task involves exploring alternative techniques and computational algorithms in order to optimise the system and satisfy the established requirements. In this context, one of the most important stages is the analysis and implementation of computational algorithms. Enormous technological advances in FPGAs (Field-Programmable Gate Arrays) have made the development of extremely complex engineering systems possible. However, the number of transistors available per chip is growing faster than our capacity to develop systems that take advantage of that growth. This well-known limitation, already apparent with ASICs (Application-Specific Integrated Circuits) before it manifested itself with FPGAs, has been increasing continuously. The development of systems based on high-capacity FPGAs involves a wide variety of tools, including methods for the efficient implementation of computational algorithms. This thesis aims to contribute to this area by exploiting reuse, raising the level of abstraction, and making algorithmic specifications clearer and more automated. More specifically, a study was carried out to derive criteria for the hardware implementation of recursive versus iterative algorithms. After presenting some of the most significant strategies for implementing recursion in hardware, a set of algorithms for solving combinatorial search problems (considered as application examples) is described in detail. Recursive and iterative versions of these algorithms were implemented and tested on FPGAs, and a careful comparative analysis is made on the basis of the results obtained.
New research tools and techniques developed in the scope of this thesis are also discussed and demonstrated.
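The recursive-versus-iterative contrast studied in the thesis can be illustrated with a classic combinatorial search. Below, invented for illustration and written in software rather than as a hardware description, are two functionally equivalent N-queens solution counters: a recursive backtracking version, and an iterative version with an explicit stack, the usual transformation when the target platform (such as an FPGA without a call stack) cannot support recursion directly.

```python
# Hedged illustration of recursive vs. iterative combinatorial search
# (N-queens solution counting), echoing the thesis's comparison; the
# thesis implements such variants in hardware, not Python.

def queens_rec(n, row=0, cols=0, d1=0, d2=0):
    """Recursive backtracking; bitmasks track attacked columns/diagonals."""
    if row == n:
        return 1
    count = 0
    for c in range(n):
        if not ((cols >> c) & 1 or (d1 >> (row + c)) & 1
                or (d2 >> (row - c + n)) & 1):
            count += queens_rec(n, row + 1, cols | 1 << c,
                                d1 | 1 << (row + c), d2 | 1 << (row - c + n))
    return count

def queens_iter(n):
    """Same search with an explicit stack instead of the call stack -
    the shape a non-recursive hardware implementation would take."""
    count, stack = 0, [(0, 0, 0, 0)]
    while stack:
        row, cols, d1, d2 = stack.pop()
        if row == n:
            count += 1
            continue
        for c in range(n):
            if not ((cols >> c) & 1 or (d1 >> (row + c)) & 1
                    or (d2 >> (row - c + n)) & 1):
                stack.append((row + 1, cols | 1 << c,
                              d1 | 1 << (row + c), d2 | 1 << (row - c + n)))
    return count

print(queens_rec(6), queens_iter(6))  # both print 4: N=6 has 4 solutions
```

The two versions explore the same search tree; what differs is where the pending state lives (call stack versus explicit memory), which is precisely the trade-off that matters for hardware implementation.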
Abstract:
Desulfurization is one of the most important processes in the refining industry. Due to growing concern about the risks to human health and the environment associated with emissions of sulfur compounds, legislation has become more stringent, requiring a drastic reduction of the sulfur content of fuel to levels close to zero (< 10 ppm S). However, conventional desulfurization processes are inefficient and have high operating costs. This scenario stimulates the improvement of existing processes and the development of new and more efficient technologies. Aiming at overcoming these shortcomings, this work investigates an alternative desulfurization process using ionic liquids for the removal of mercaptans from jet-fuel streams. The screening and selection of the most suitable ionic liquid were performed based on experimental and COSMO-RS-predicted liquid-liquid equilibrium data. A model feed of 1-hexanethiol and n-dodecane was selected to represent a jet-fuel stream. High selectivities were determined, a result of the low mutual solubility between the ionic liquid and the hydrocarbon matrix, proving the potential of the ionic liquid, which prevents the loss of fuel to the solvent. The distribution ratios of mercaptans towards the ionic liquids were not as favorable, making traditional liquid-liquid extraction processes unsuitable for the removal of aliphatic S-compounds due to the high volume of extractant required. This work explores alternative methods and proposes the use of ionic liquids in a membrane-assisted separation process. In the proposed process, the ionic liquid is used as the extracting solvent for the sulfur species in a hollow-fiber membrane contactor, without co-extracting the other jet-fuel compounds. In a second contactor, the ionic liquid is regenerated by sweep-gas stripping, which allows its reuse in a closed loop between the two membrane contactors.
This integrated extraction/regeneration desulfurization process produced a jet-fuel model with a sulfur content lower than 2 ppm S, as envisaged by legislation for the use of ultra-low-sulfur jet fuel. This result confirms the high potential for the development of ultra-deep desulfurization applications.
Abstract:
Ionic liquids are a class of solvents that, due to their unique properties, have been proposed in the past few years as alternatives to some hazardous volatile organic compounds. They are already used by industry, where it has been possible to improve different processes by incorporating this kind of non-volatile and often liquid solvent. However, even if ionic liquids cannot contribute to air pollution, owing to their negligible vapour pressures, they can be dispersed through aquatic streams, thus contaminating the environment. Therefore, the main goals of this work are to study the mutual solubilities between water and different ionic liquids in order to assess their environmental impact, and to propose effective methods to remove and, whenever possible, recover ionic liquids from aqueous media. The liquid-liquid phase behaviour of different ionic liquids and water was evaluated in the temperature range between (288.15 and 318.15) K. For ionic liquids with higher melting temperatures, a narrower temperature range was studied. The gathered data allowed a deep understanding of the structural effects of the ionic liquid, namely the cation core, isomerism, symmetry, cation alkyl chain length and the anion nature, through their mutual solubilities (saturation values) with water. The experimental data were also supported by the COnductor-like Screening MOdel for Real Solvents (COSMO-RS), and for some more specific systems molecular dynamics simulations were also employed for a better comprehension of these systems at the molecular level.
On the other hand, in order to remove and recover ionic liquids from aqueous solutions, two different methods were studied: one based on aqueous biphasic systems, which allowed an almost complete recovery of hydrophilic ionic liquids (those completely miscible with water at temperatures close to room temperature) by the addition of strong salting-out agents (Al2(SO4)3 or AlK(SO4)2); and the other based on the adsorption of several ionic liquids onto commercial activated carbon. The first approach, in addition to allowing the removal of ionic liquids from aqueous solutions, also makes it possible to recover the ionic liquid and to recycle the remaining solution. In the adsorption process, only the removal of the ionic liquid from aqueous solutions was attempted. Nevertheless, a broad understanding of the structural effects of the ionic liquid on the adsorption process was attained, and a final improvement in the adsorption of hydrophilic ionic liquids by the addition of an inorganic salt (Na2SO4) was also achieved. Still, the development of a recovery process that allows the reuse of the ionic liquid is required for the development of sustainable processes.
Abstract:
This work focused on the study of the impact of environmental, installation and usage conditions on the degradation of optical fibre, which frequently results in reduced optical fibre performance. Among these factors, the effects of environments that are aggressive to the fibre coating were studied, namely on lifetime and mechanical strength. The effect of propagating high-power optical signals through tight bends, and its influence on the degradation of fibre performance, was also studied. In this context, the performance of bend-insensitive fibres and erbium-doped fibres was also examined, and the dynamics of the fibre fuse effect in these fibres was analysed. As an integral part of optical networks, optical connectors are of extreme importance to their structure. Their performance is reflected in the quality of service of the network, so it is essential to study the factors that contribute to their degradation and malfunction. This work therefore presents a study of the behaviour of optical connectors under mishandling (such as insufficient cleaning and physical degradation of the end face). In addition, emphasis was also given to the reuse of fibre damaged by the fibre fuse effect in the development of sensors suitable for monitoring refractive index, hydrostatic pressure, strain or high temperature. This procedure emerges as a low-cost solution for developing optical fibre sensors from fibre that has been damaged and rendered unusable for its usual applications in the transmission and/or reflection of optical signals.
Abstract:
More and more software projects today are security-related in one way or another. Requirements engineers often fail to recognise indicators of security problems, which is a major source of security problems in practice. Identifying security-relevant requirements is labour-intensive and error-prone. In order to facilitate the security requirements elicitation process, we present an approach supporting organisational learning on security requirements by establishing company-wide experience resources and a socio-technical network to benefit from them. The approach is based on modelling the flow of requirements and related experiences. Based on those models, we enable people to exchange experiences about security requirements while they write and discuss project requirements. At the same time, the approach enables participating stakeholders to learn while they write requirements. This can increase security awareness and facilitate learning at both individual and organisational levels. As a basis for our approach, we introduce heuristic assistant tools that support the reuse of existing security-related experiences. In particular, these include Bayesian classifiers that automatically issue a warning when new requirements seem to be security-relevant. Our results indicate that this is feasible, in particular if the classifier is trained with domain-specific data and documents from previous projects. We show how the ability to identify security-relevant requirements can be improved using this approach. We illustrate our approach with a step-by-step example of how we improved the security requirements engineering process at the European Telecommunications Standards Institute (ETSI) and report on experiences gained in this application.
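The heuristic-assistant idea can be sketched with a small hand-rolled naive Bayes text classifier: train on requirements labelled as security-relevant or not, then warn when a new requirement scores higher under the security class. This is an invented toy (with made-up training sentences), not the paper's classifier or training data.

```python
# Hedged sketch of a naive Bayes warning assistant for requirements:
# flag a new requirement if it is more likely under the "security"
# class. Training sentences below are invented for illustration.
from collections import Counter
import math

def train(docs):
    """docs: list of (text, label) pairs. Returns (counts, totals, priors)."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        for w in text.lower().split():
            counts.setdefault(label, Counter())[w] += 1
            totals[label] += 1
    return counts, totals, priors

def classify(text, model):
    counts, totals, priors = model
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / sum(priors.values()))
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary.
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [
    ("passwords must be stored hashed", "security"),
    ("encrypt all data in transit", "security"),
    ("the ui shall use the corporate colour scheme", "other"),
    ("reports are exported as pdf", "other"),
]
model = train(docs)
print(classify("all stored passwords must be hashed", model))  # -> security
```

As the abstract notes, such a classifier only becomes genuinely useful when trained on domain-specific requirements documents from previous projects; the toy corpus here merely shows the mechanism.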