Abstract:
Stereotactic body radiotherapy (SBRT) is now an established therapy in stage I lung cancer, with local control rates comparable to surgical resection. Owing to the conformity of treatment dose delivery and with appropriate fractionation considerations, minimal side-effects to surrounding normal tissues are observed in most patients. SBRT is now being used in the treatment of oligometastatic disease, alone or alongside systemic therapy. At present there is a paucity of evidence showing a clinical benefit, but several international studies are being set up or have started recruitment. This overview considers the clinical entity of an oligometastatic state, discusses the role of SBRT in the management of oligometastatic disease and explores potential novel therapy combinations with SBRT.
Abstract:
The signing of the Ulster Covenant on 28 September 1912 by almost 450,000 men and women was a powerful act of defiance on the part of Unionists in the context of what they perceived as the threat to their way of life represented by the Liberal Government's policy of Irish Home Rule. This article attempts to look beyond the well-studied leadership figures of Carson and Craig in order to fashion insights into the way Ulster Protestant society was mobilised around the Covenant and opposition to Home Rule. It draws attention to hitherto over-shadowed personalities who can be said to have exerted crucial local influence. It also contends that although pan-Protestant denominational unity provided the basis for the success of the Covenant, the Presbyterian community was particularly cohesive and purposeful in the campaign. The article further argues that the risk-taking defiance that came more easily to the Presbyterians, on account of a troubled history, largely evaporated in the new political circumstances of Northern Ireland when it became a separate devolved political entity within the UK from 1921.
Abstract:
Malignant initiation, leukaemic transformation, and disease progression in haematological malignancies involve a series of mutational events in genes involved in normal housekeeping functions of the cell. These acquired genetic changes can lead to either increased proliferation or a decreased rate of apoptosis, thus allowing expansion of the malignant clone. Although leukaemia can arise as a de novo disease, it has become increasingly clear that therapies, including the use of irradiation and/or chemotherapy, can give rise to malignancy. Therapy-associated myelodysplasia (t-MDS) and therapy-associated acute myeloid leukaemia (t-AML) account for 10-20% of new cases of these diseases. Although these secondary malignancies have been recognised as a clinical entity for nearly 30 years, molecular studies are now pinpointing various regions of the genome that are susceptible to DNA damage by these chemotherapeutic/radiotherapeutic strategies. The detection of new malignancies (both solid tumours and haematological tumours) following allogeneic bone marrow transplantation (BMT) is also providing us with some clues to the nature of leukaemogenesis, particularly with the observation that leukaemia can occur in donor cells post-allogeneic BMT.
Abstract:
The New Towns initiative in the UK and Northern Ireland, enshrined in the Act of 1946, derived from a stream of philosophical thought that was a reaction to modernity, particularly Victorian industrialisation. This was developed through the writings of Ruskin and Morris and crystallised by Ebenezer Howard in his book Garden Cities of Tomorrow, which culminated in the design of Letchworth by Parker and Unwin (completed 1914). Letchworth, however, was more than just a physical and spatial entity: it was actually a policyscape, a novel economic and social policy landscape that regulated development in a modern and scientific way.
These themes, the scientification of urban design and the regulation of urban development through policy, run through the whole New Town movement, right up to the development of the eco-towns of today. New Towns, in fact, can be seen as an embodiment of modernity, as well as a reaction to it.
Abstract:
This theoretical paper attempts to define some of the key components and challenges involved in creating embodied conversational agents that can be genuinely interesting conversational partners. Wittgenstein’s argument concerning talking lions emphasizes the importance of having a shared common ground as a basis for conversational interactions. Virtual bats suggests that, for some people at least, it is important that there be a feeling of authenticity concerning a subjectively experiencing entity that can convey what it is like to be that entity. Electric sheep reminds us of the importance of empathy in human conversational interaction and that we should provide a full communicative repertoire of both verbal and non-verbal components if we are to create genuinely engaging interactions; indeed, we may be making the task more difficult rather than easier if we leave out the non-verbal aspects of communication. Finally, analogical peacocks highlights the importance of between-minds alignment and establishes the longer-term goal of being interesting, creative, and humorous if an embodied conversational agent is to be a truly engaging conversational partner. Some potential directions and solutions for addressing these issues are suggested.
Abstract:
Background
It has been argued that though correlated with mental health, mental well-being is a distinct entity. Despite the wealth of literature on mental health, less is known about mental well-being. Mental health is something experienced by individuals, whereas mental well-being can be assessed at the population level. Accordingly it is important to differentiate the individual and population level factors (environmental and social) that could be associated with mental health and well-being, and as people living in deprived areas have a higher prevalence of poor mental health, these relationships should be compared across different levels of neighbourhood deprivation.
Methods
A cross-sectional representative random sample of 1,209 adults from 62 Super Output Areas (SOAs) in Belfast, Northern Ireland (Feb 2010 – Jan 2011) was recruited in the PARC Study. Interviewer-administered questionnaires recorded data on socio-demographic characteristics, health-related behaviours, individual social capital, self-rated health, mental health (SF-8) and mental well-being (WEMWBS). Multivariable linear regression analyses, accounting for clustering by SOA, were used to explore the associations between individual and perceived community characteristics and mental health and mental well-being, and to investigate how these associations differed by level of neighbourhood deprivation.
Results
Thirty-eight per cent and 30% of the variability in the measures of mental well-being and mental health, respectively, could be explained by individual factors and perceived community characteristics. In the total sample and stratified by neighbourhood deprivation, age, marital status and self-rated health were associated with both mental health and well-being, with the ‘social connections’ and local area satisfaction elements of social capital also emerging as explanatory variables. An increase of +1 in EQ-5D-3L was associated with +1 SD of the population mean in both mental health and well-being. Similarly, a change from ‘very dissatisfied’ to ‘very satisfied’ for local area satisfaction would result in +8.75 for mental well-being, but only in the more affluent areas.
Conclusions
Self-rated health was associated with both mental health and mental well-being. Of the individual social capital explanatory variables, ‘social connections’ was more important for mental well-being. Although similarities in the explanatory variables of mental health and mental well-being exist, socio-ecological interventions designed to improve them may not have equivalent impacts in rich and poor neighbourhoods.
Abstract:
Master data management (MDM) integrates data from multiple structured data sources and builds a consolidated 360-degree view of business entities such as customers and products. Today’s MDM systems are not prepared to integrate information from unstructured data sources, such as news reports, emails, call-center transcripts, and chat logs. However, those unstructured data sources may contain valuable information about the same entities known to MDM from the structured data sources. Integrating information from unstructured data into MDM is challenging, as textual references to existing MDM entities are often incomplete and imprecise, and the additional entity information extracted from text should not impact the trustworthiness of MDM data.
In this paper, we present an architecture for making MDM text-aware and showcase its implementation as IBM InfoSphere MDM Extension for Unstructured Text Correlation, an add-on to IBM InfoSphere Master Data Management Standard Edition. We highlight how MDM benefits from additional evidence found in documents when doing entity resolution and relationship discovery. We experimentally demonstrate the feasibility of integrating information from unstructured data sources into MDM.
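The correlation step described above, linking incomplete and imprecise textual references to master records, can be illustrated with a minimal fuzzy-matching sketch. The record layout, similarity measure and threshold here are illustrative assumptions, not the product's actual implementation:

```python
from difflib import SequenceMatcher

def match_mention(mention, master_records, threshold=0.6):
    """Return master records whose name is sufficiently similar to a
    textual mention. All candidates above the threshold are kept, since
    text references are often incomplete and a downstream rule (or a
    human) may need to decide among them."""
    def similarity(a, b):
        # character-level similarity in [0, 1]
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    scored = [(similarity(mention, rec["name"]), rec) for rec in master_records]
    return [rec for score, rec in scored if score >= threshold]

# hypothetical master records, not real MDM data
masters = [{"id": 1, "name": "John A. Smith"}, {"id": 2, "name": "Jane Doe"}]
print([r["id"] for r in match_mention("Jon Smith", masters)])  # → [1]
```

A production system would combine several attributes (name, address, phone) and calibrated weights rather than a single string ratio, but the thresholding idea is the same.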
Abstract:
In 1862, Glasgow Corporation initiated the first of a series of three legislative acts which would become known collectively as the City Improvements Acts. Despite having some influence on the nature of the built fabric of the expanding city as a whole, the most extensive consequences of these acts were reserved for one specific area of the city, the remnants of the medieval Old Town. As the city had expanded towards all points of the compass in a regular, grid-iron structure throughout the nineteenth century, the Old Town remained singular: a densely wrought fabric of medieval wynds, vennels, oblique passageways and accelerated tenementalisation. Here, as the rest of the city began to assume the form of an ordered entity, visible and classifiable, one could still find addresses such as ‘Bridgegate, No. 29, backland, stair first left, three up, right lobby, door facing’ (quoted in Pacione, 1995).
Unsurprisingly, this place, where proximity to the midden (dung-heap) was considered an enviable position, was seen by the authorities as a major health hazard and a source not only of cholera, but also of the more alarming typhoid epidemic of 1842. Accordingly, the demolitions which occurred in the backlands of the Old Town under the first of the acts, the Glasgow Police Act of 1862, were justified on health and medical grounds. But disease was not the only social problem thought to issue from this district. Reports from social reformers including Friedrich Engels suggested that the decay of the area’s physical fabric could be extended to the moral profile of its inhabitants. This was in such a state of degeneracy that there were calls for a nearby military barracks to be relocated to more salubrious climes because troops were routinely coming into contact ‘with the most dissolute and profligate portion of the population’ (Peter Clonston, Lord Provost, June 1861). Perhaps more worrying for the city fathers, however, was that the barracks’ arsenal was seen as a potential source of arms for the militant and often illegal cotton workers’ unions and organisations who inhabited the Old Town as well as the districts to the east. In fact, the Old Town and East End had been the site of numerous working-class actions and riots since 1787, including a strike of 60,000 workers in 1820, 100,000 in 1838, and the so-called Bread Riots of 1848, where shouts of ‘Vive La Revolution’ were reported in the Gallowgate.
The events in Paris in 1848 precipitated Baron Haussmann’s interventions into that city. The boulevards were in turn visited by members of Glasgow Corporation and ultimately, it can be argued, provided an example for Old Town Glasgow. This paper suggests that the City Improvements Acts carried a similarly complex and pervasive agenda, one which embodied not only health, class conflict and sexual morality but also the more local condition of sectarianism. And, as in Paris, these were played out spatially in an extensive reconfiguration of the urban fabric of the Old Town which, through the creation of new streets and a railway yard, not only made it more amenable to large-scale military manoeuvres but also opened up the area to capitalist accumulation. By the end of the works, the medieval heritage of the Old Town had been almost completely razed, the working-class and Catholic East End had, through the insertion of the railway yard, been isolated from the city centre, and approximately 70,000 people had been made homeless.
Abstract:
A core activity in information systems development involves building a conceptual model of the domain that an information system is intended to support. Such models are created using a conceptual-modeling (CM) grammar. Just as high-quality conceptual models facilitate high-quality systems development, high-quality CM grammars facilitate high-quality conceptual modeling. This paper provides a new perspective on ways to improve the quality of the semantics of CM grammars. For many years, the leading approach to this topic has relied on ontological theory. We show, however, that the ontological approach captures only half the story. It needs to be coupled with a logical approach. We explain how the ontological quality and logical quality of CM grammars interrelate. Furthermore, we outline three contributions that a logical approach can make to evaluating the quality of CM grammars: a means of seeing some familiar conceptual-modeling problems in simpler ways; the illumination of new problems; and the ability to prove the benefit of modifying existing CM grammars in particular ways. We demonstrate these benefits in the context of the Entity-Relationship grammar. More generally, our paper opens up a new area of research with many opportunities for future research and practice.
Abstract:
We propose and advocate basic principles for the fusion of incomplete or uncertain information items that should apply regardless of the formalism adopted for representing pieces of information coming from several sources. This formalism can be based on sets, logic, partial orders, possibility theory, belief functions or imprecise probabilities. We propose a general notion of information item representing incomplete or uncertain information about the values of an entity of interest. It is supposed to rank such values in terms of relative plausibility, and to explicitly point out impossible values. Basic issues affecting the results of the fusion process, such as the relative information content and consistency of information items, as well as their mutual consistency, are discussed. For each representation setting, we present fusion rules that obey our principles, and compare them to postulates specific to the representation proposed in the past. In the crudest (Boolean) representation setting (using a set of possible values), we show that whether the set is understood in terms of most plausible values or in terms of non-impossible ones matters for choosing a relevant fusion rule. In particular, in the latter case our principles justify the method of maximal consistent subsets, while the former is related to the fusion of logical bases. Then we consider several formal settings for incomplete or uncertain information items, where our postulates are instantiated: plausibility orderings, qualitative and quantitative possibility distributions, belief functions and convex sets of probabilities. The aim of this paper is to provide a unified picture of fusion rules across various uncertainty representation settings.
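In the Boolean setting, the method of maximal consistent subsets mentioned above can be sketched directly: find the largest groups of sources whose sets of possible values intersect, and fuse by taking the union of those groups' intersections. This is a minimal illustrative sketch of the idea, not the paper's formal construction:

```python
from itertools import combinations

def fuse_mcs(sources):
    """Fuse set-valued information items by maximal consistent subsets:
    find the maximal groups of sources whose sets have a nonempty
    intersection, then return the union of those intersections.
    Brute-force enumeration: exponential in the number of sources,
    fine for illustration."""
    n = len(sources)
    mcs = []  # list of (index set, intersection) for maximal groups
    # enumerate candidate groups from largest to smallest, so any
    # smaller group contained in an already-kept group is skipped
    for size in range(n, 0, -1):
        for idx in combinations(range(n), size):
            inter = set.intersection(*(sources[i] for i in idx))
            if inter and not any(set(idx) <= m for m, _ in mcs):
                mcs.append((set(idx), inter))
    return set().union(*(inter for _, inter in mcs))

# three sources; sources 0 and 1 agree on {2}, source 2 conflicts
print(fuse_mcs([{1, 2}, {2, 3}, {4}]))  # {2, 4}
```

Note how the result retains the conflicting source's value rather than discarding it, which is what distinguishes this rule from plain intersection.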
Abstract:
The electronic storage of medical patient data is becoming a daily experience in most practices and hospitals worldwide. However, much of the data available is in free-form text, a convenient way of expressing concepts and events, but especially challenging if one wants to perform automatic searches, summarization or statistical analysis. Information Extraction can relieve some of these problems by offering a semantically informed interpretation and abstraction of the texts. MedInX, the Medical Information eXtraction system presented in this document, is the first information extraction system developed to process textual clinical discharge records written in Portuguese. The main goal of the system is to improve access to the information locked up in unstructured text and, consequently, the efficiency of the health care process, by allowing faster and more reliable access to quality health information, for both patients and health professionals. MedInX components are based on Natural Language Processing principles, and provide several mechanisms to read, process and utilize external resources, such as terminologies and ontologies, in the process of automatically mapping free-text reports onto a structured representation. Moreover, the flexible and scalable architecture of the system also allowed its application to the task of Named Entity Recognition in a shared evaluation contest focused on Portuguese general-domain free-form texts. The evaluation of the system on a set of authentic hospital discharge letters indicates that the system achieves 95% F-measure on the task of entity recognition, and 95% precision on the task of relation extraction. Example applications, demonstrating the use of MedInX capabilities in real applications in the hospital setting, are also presented in this document.
These applications were designed to answer common clinical problems related to the automatic coding of diagnoses and other health-related conditions described in the documents, according to the international classification systems ICD-9-CM and ICF. The automatic review of the content and completeness of the documents is an example of another developed application, named the MedInX Clinical Audit system.
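An F-measure figure such as the 95% quoted above is conventionally computed as the harmonic mean of precision and recall over the extracted entity spans. A minimal sketch (the span encoding below is an illustrative assumption, not MedInX's internal format):

```python
def prf(predicted, gold):
    """Micro precision, recall and F-measure over sets of extracted
    entity spans: a prediction counts as a true positive only if it
    matches a gold span exactly."""
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f

# spans encoded as (document id, start offset, end offset, entity type)
gold = {("doc1", 0, 4, "DRUG"), ("doc1", 10, 18, "DIAG")}
pred = {("doc1", 0, 4, "DRUG"), ("doc1", 20, 25, "DIAG")}
print(prf(pred, gold))  # (0.5, 0.5, 0.5)
```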
Abstract:
The integration of services from the perspective of citizens and businesses, and the need to guarantee characteristics of Public Administration such as versatility and competitiveness, place constraints on the design of service-integration architectures. To integrate services in a way that accommodates the mutability of Public Administration, workflows must be created dynamically. However, the dynamic creation of workflows raises security concerns, notably regarding the privacy of the results produced during the execution of a workflow, and regarding the application of policies that control participation in the workflow by its various executors. In this work we present a set of principles and rules (an architecture) that allows dynamic workflows to be created and executed while resolving, through a security model, the issues above. The architecture uses service composition to build complex services that may embody a dynamic workflow. It also uses a paradigm of standard message exchange between the service providers involved in a dynamic workflow. The proposed security model is closely tied to the set of messages defined in the architecture. In the course of this work, several service-integration architectures and/or platforms were identified and analysed. The aim of this analysis was to identify the architectures that allow the creation of dynamic workflows and, among these, those that use privacy mechanisms for the results and mechanisms to control the participation of the executors of those workflows. The integration architecture we present is versatile and scalable, allows concurrent service provision among service providers, and allows dynamic workflows to be created.
The architecture lets the entities executing a workflow decide on their own participation, decide on the participation of third parties (to whom they delegate services), and decide to whom they deliver the results. Participants are accredited by certification entities recognised by the other participants. The credentials provided by these certification entities are the starting point for applying security policies within the architecture. To validate the proposed architecture, several use cases were identified that exemplify the need to build dynamic workflows to serve complex services (not provided in full by a single entity). These use cases were implemented in a prototype of the architecture developed for this purpose. This experimentation showed that the architecture is suitable for providing such services using dynamic workflows, and that when executing these workflows the executors have adequate security mechanisms to control their own participation, the participation of third parties, and the privacy of the results produced within them.
Abstract:
In modern society, new devices, applications and technologies with sophisticated capabilities are converging on the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point trying to provide enough flexibility for users and profits for operators, while dealing with the complex requirements raised by this recent evolution. Fully aligned with future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that allows the creation of several networks with different characteristics according to their context, on top of a single Wireless Mesh Network (WMN), whose infrastructure and protocols are highly flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices’ capabilities or their services’ quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, this Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the future Internet, while allowing dynamic resource management. In this way, network virtualization can allow a flexible and programmable configuration of a WMN, so that it can be shared by multiple logical networks (or virtual networks - VNs).
Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target the mechanisms to: (i) discover and select the VN to assign to a user; and (ii) create, adapt and remove the VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve the performance and reduce the complexity of VN control and management. Finally, due to the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities across the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
Abstract:
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize the extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. Such an approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
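The linguistic characteristics optimised per concept type in machine-learning taggers of this kind are typically token-level features fed to a sequence model such as a CRF. A minimal illustrative sketch of such features (this particular feature set is our assumption, not Gimli's or TrigNER's actual one):

```python
def token_features(tokens, i):
    """Build a feature dictionary for token i, of the kind a CRF-style
    entity tagger consumes: surface form, casing, digits, suffix, and
    the immediate left/right context."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_upper": tok.isupper(),
        "has_digit": any(c.isdigit() for c in tok),
        "suffix3": tok[-3:],
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

feats = token_features(["BRCA1", "mutations", "cause", "cancer"], 0)
print(feats["has_digit"], feats["suffix3"])  # True CA1
```

Incremental optimisation, as described for Gimli, amounts to adding or removing such features per concept type and keeping only those that improve held-out F-measure.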
Abstract:
In this thesis, carried out within the Doctoral Programme in Chemistry of the University of Aveiro, two families of synthetic receptors were developed: macrocyclic receptors based on the tetraazacalix[2]arene[2]triazine platform, and acyclic receptors built from simple diamines. The macrocyclic platform was decorated at the bridging nitrogen atoms with molecular-recognition units containing fragments with amide groups, for anion recognition, or with carboxylic acid groups, for the coordination of transition metals. The acyclic receptors were obtained by coupling diamines (ethylenediamine, ortho-phenylenediamine or 2-aminobenzylamine) with a lipophilic unit incorporating a heterocyclic ring (oxadiazole or furan derivatives) and with an isocyanate derivative. These asymmetric molecules, bearing an amide group and a urea group as molecular-recognition units, were evaluated as receptors and transmembrane transporters of biologically relevant anions (Cl- and HCO3-). The experimental results are described over three chapters, after an initial bibliographic chapter. Chapter 1 opens with a brief literature review of the recent development of functional receptors based on azacalixarenes and of their applications, namely in molecular recognition. A second part briefly reviews (thio)urea-derived receptors, related to those synthesised in this thesis, with anion-recognition and transmembrane-transport properties. Chapter 2 reports a series of new macrocycles in which the bridging nitrogen atoms of tetraazacalix[2]arene[2]triazine are functionalised with methyl bromoacetate. Three new macrocycles with four ester groups as pendant arms were prepared from tetraazacalix[2]arene[2]triazine precursors with the triazine rings substituted with chlorine, methylamine or hexylamine.
The acetate groups were hydrolysed under basic conditions; each of the dialkylamine derivatives gave a compound with four carboxylic groups, whereas the dichlorinated analogue gave a mixture of compounds with two carboxylic groups and with the chlorine atoms replaced by hydroxyl groups. Subsequently, the coordination properties of the alkylamine derivatives towards copper(II) were evaluated by UV-Vis spectroscopy, giving similar stability constants (log K ≈ 6.7). Chapter 3 describes three macrocycles obtained by functionalising the bridging nitrogen atoms of tetraazacalix[2]arene[2]triazine with amide groups derived from N-Boc-ethylenediamine, benzylamine and (S)-methylbenzylamine. The affinity of these receptors for a series of carboxylate anions (oxalate, malonate, succinate, glutarate, diglycolate, pimelate, suberate, fumarate, maleate, phthalate and isophthalate) and inorganic anions (Cl-, H2PO4- and SO42-) was evaluated by 1H NMR titration. These macrocycles, together with those described in Chapter 2, are the first examples reported in the literature of synthetic receptors based on the tetraazacalix[2]arene[2]triazine platform with functional groups on the bridging nitrogens. Of the three receptors, the N-Boc-ethylenediamine derivative, with eight N-H groups, shows the highest affinity for the anions studied. Chapter 4 describes the synthesis of 59 acyclic compounds (vide supra), obtained in three synthetic steps in good yields. In the design of this library of molecules, the anion affinity of the urea groups was tuned by inserting different aryl or alkyl substituents with distinct electronic properties. The introduction of these groups, in conjugation with an oxadiazole or furan ring, also allowed the lipophilicity of these compounds to be tuned. The affinity of these receptors for chloride and bicarbonate anions, and in some cases for fumarate and maleate, was investigated by 1H NMR titration.
These compounds showed association constants compatible with transmembrane chloride transport. They also showed high affinities for fumarate and maleate, with selectivity for the latter. The results of chloride-transport assays with these receptors across POPC vesicles are also discussed. Chapter 5 presents the general conclusions of this doctoral thesis. Chapter 6 contains the spectroscopic data and the remaining experimental details for all the compounds synthesised.