892 results for Database, Image Retrieval, Browsing, Semantic Concept


Relevance:

20.00%

Publisher:

Abstract:

RESUMO: Apesar de toda a evolução farmacológica e de meios complementares de diagnóstico possível nos últimos anos, o enfarte agudo do miocárdio e a morte súbita continuam a ser a primeira manifestação da aterosclerose coronária para muitos doentes, que estavam previamente assintomáticos. Os exames complementares de diagnóstico tradicionalmente usados para avaliar a presença de doença coronária baseiam-se na documentação de isquémia do miocárdio e, por este motivo, a sua positividade depende da presença de lesões coronárias obstrutivas. As lesões coronárias não obstrutivas estão também frequentemente implicadas no desenvolvimento de eventos coronários. Apesar de o risco absoluto de instabilização por placa ser superior para as lesões mais volumosas e obstrutivas, estas são menos prevalentes do que as placas não obstrutivas e assim, por questões probabilísticas, os eventos coronários resultam com frequência da rotura ou erosão destas últimas. Estudos recentes de imagiologia intracoronária avançada forneceram evidência de que, apesar de ser possível identificar algumas características de vulnerabilidade em placas associadas ao desenvolvimento subsequente de eventos coronários, a sua sensibilidade e especificidade são muito baixas para aplicação clínica. Mais do que o risco associado a uma placa em particular, para o doente poderá ser mais importante o risco global da sua árvore coronária, reflexo da soma das probabilidades de todas as suas lesões, sendo que quanto maior for a carga aterosclerótica maior será o seu risco. A angio TC cardíaca é a mais recente técnica de imagem não invasiva para o estudo da doença coronária e surgiu nos últimos anos fruto de importantes avanços na tecnologia de TC multidetectores. Estes avanços permitiram uma progressiva melhoria da resolução espacial e temporal, contribuindo para a melhoria da qualidade dos exames, bem como uma significativa redução da dose de radiação. A par desta evolução tecnológica, foi aumentando a experiência e gerada mais evidência científica, tornando a angio TC cardíaca cada vez mais robusta na avaliação da doença coronária e aumentando a sua aplicabilidade clínica. Mais recentemente apareceram vários trabalhos que validaram o seu valor prognóstico, assinalando a sua chegada à idade adulta. Para além de permitir excluir a presença de doença coronária e de identificar a presença de estenoses significativas, a angio TC cardíaca permite identificar a presença de lesões coronárias não obstrutivas, característica ímpar desta técnica como modalidade de imagem não invasiva. Ao permitir identificar a totalidade das lesões ateroscleróticas (obstrutivas e não obstrutivas), a angio TC cardíaca poderá fornecer uma quantificação da carga aterosclerótica coronária total, podendo essa identificação ser útil na estratificação dos indivíduos em risco de eventos coronários. Neste trabalho foi possível identificar preditores demográficos e clínicos de uma elevada carga aterosclerótica coronária documentada pela angio TC cardíaca, embora o seu poder discriminativo tenha sido relativamente modesto, mesmo quando agrupados em scores clínicos. Entre os vários scores, o desempenho foi um pouco melhor para o score de risco cardiovascular Heartscore. Estas limitações espelham a dificuldade de prever, apenas com base em variáveis clínicas, mesmo quando agrupadas em scores, a presença e extensão da doença coronária. Um dos factores de risco clássicos, a obesidade, parece ter uma relação paradoxal com a carga aterosclerótica, o que pode justificar algumas limitações da estimativa com base em scores clínicos. A diabetes mellitus, por outro lado, foi um dos preditores clínicos mais importantes, funcionando como modelo de doença coronária mais avançada, útil para avaliar o desempenho dos diferentes índices de carga aterosclerótica. Dada a elevada prevalência de placas ateroscleróticas identificáveis por angio TC na árvore coronária, torna-se importante desenvolver ferramentas que permitam quantificar a carga aterosclerótica e assim identificar os indivíduos que poderão eventualmente beneficiar de medidas de prevenção mais intensivas. Com este objectivo, foi desenvolvido um índice de carga aterosclerótica que reúne a informação global acerca da localização, do grau de estenose e do tipo de placa, obtida pela angio TC cardíaca, o CT-LeSc. Este score poderá vir a ser uma ferramenta útil para quantificação da carga aterosclerótica coronária, sendo de esperar que possa traduzir a informação prognóstica da angio TC cardíaca. Por fim, o conceito de árvore coronária vulnerável poderá ser mais importante do que o da placa vulnerável e a sua identificação pela angio TC cardíaca poderá ser importante numa estratégia de prevenção mais avançada. Esta poderá permitir personalizar as medidas de prevenção primária, doseando melhor a sua intensidade em função da carga aterosclerótica, podendo esta vir a constituir uma das mais importantes indicações da angio TC cardíaca no futuro.

ABSTRACT: Despite the significant advances made possible in recent years in the field of pharmacology and diagnostic tests, acute myocardial infarction and sudden cardiac death remain the first manifestation of coronary atherosclerosis in a significant proportion of patients, as many were previously asymptomatic. Traditionally, the diagnostic exams employed for the evaluation of possible coronary artery disease are based on the documentation of myocardial ischemia and, in this way, they are linked to the presence of obstructive coronary stenosis. Nonobstructive coronary lesions are also frequently involved in the development of coronary events. Although the absolute risk of becoming unstable per plaque is higher for more obstructive and higher-burden plaques, these are much less frequent than nonobstructive lesions and therefore, in terms of probability for the patient, coronary events are often the result of rupture or erosion of the latter. Recent advanced intracoronary imaging studies provided evidence that, although it is possible to identify some features of vulnerability in plaques associated with the subsequent development of coronary events, the sensitivity and specificity are very limited for clinical application. More important than the risk associated with any individual plaque may be the global risk of the entire coronary tree, as reflected by the sum of the different probabilities of all the lesions, since the higher the coronary atherosclerotic burden, the higher the risk for the patient. Cardiac CT, or coronary CT angiography, is still a young modality. It is the most recent noninvasive imaging modality in the study of coronary artery disease and its development was possible due to important advances in multidetector CT technology. These allowed significant improvements in temporal and spatial resolution, leading to better image quality and also impressive reductions in radiation dose. At the same time, the increasing experience with this technique led to a growing body of scientific evidence, making cardiac CT a robust imaging tool for the evaluation of coronary artery disease and broadening its clinical indications. More recently, several publications documented its prognostic value, marking the transition of cardiac CT to adulthood. Besides being able to exclude the presence of coronary artery disease and of obstructive lesions, cardiac CT also allows the identification of nonobstructive lesions, making this a unique tool in the field of noninvasive imaging modalities. By evaluating both obstructive and nonobstructive lesions, cardiac CT can provide a quantification of total coronary atherosclerotic burden, and this can be useful to stratify the risk of future coronary events. In the present work, it was possible to identify significant demographic and clinical predictors of a high coronary atherosclerotic burden as assessed by cardiac CT, but with modest odds ratios, even when the individual variables were gathered into clinical scores. Among these different clinical scores, the performance was better for the Heartscore, a cardiovascular risk score. This modest performance underlines the limitations of predicting the presence and severity of coronary disease based only on clinical variables, even when combined in risk scores. One of the classical risk factors, obesity, in fact had a paradoxical relation with coronary atherosclerotic burden, which might explain some of the limitations of the clinical models. In contrast, diabetes mellitus was one of the strongest clinical predictors and was considered a model of more advanced coronary disease, useful to evaluate the performance of different plaque burden scores. Given the high prevalence of plaques that can be identified in the coronary tree of patients undergoing cardiac CT, it is of utmost importance to develop tools to quantify the total coronary atherosclerotic burden, allowing the identification of patients who could eventually benefit from more intensive preventive measures. This was the rationale for the development of a coronary atherosclerotic burden score, reflecting the comprehensive information on localization, degree of stenosis and plaque composition provided by cardiac CT: the CT-LeSc. This score may become a useful tool to quantify total coronary atherosclerotic burden and is expected to convey the strong prognostic information of cardiac CT. Lastly, the concept of the vulnerable coronary tree might become more important than the concept of the vulnerable plaque, and its assessment by cardiac CT might become important in a more advanced primary prevention strategy. This could lead to a more tailored primary prevention, adjusting the intensity of preventive measures to the atherosclerotic burden, and this might become one of the most important indications of cardiac CT in the near future.
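The abstract does not give the CT-LeSc formula, so the following Python sketch is purely illustrative of the idea it describes: scoring each lesion from its location, degree of stenosis and plaque composition, and summing over the whole coronary tree. The weights, category names and example lesions are hypothetical placeholders, not the published CT-LeSc coefficients.

```python
# Purely illustrative sketch: aggregating per-lesion information (location,
# stenosis degree, plaque composition) into a single atherosclerotic burden
# score. The weights below are hypothetical placeholders and are NOT the
# published CT-LeSc coefficients, which the abstract does not specify.
from dataclasses import dataclass

# Hypothetical multipliers: proximal segments, tighter stenoses and
# partially calcified ("mixed") plaques weigh more.
LOCATION_WEIGHT = {"proximal": 3.0, "mid": 2.0, "distal": 1.0}
STENOSIS_WEIGHT = {"<50%": 1.0, "50-69%": 1.5, ">=70%": 2.0}
COMPOSITION_WEIGHT = {"calcified": 1.0, "non-calcified": 1.2, "mixed": 1.5}

@dataclass
class Lesion:
    location: str      # "proximal" | "mid" | "distal"
    stenosis: str      # "<50%" | "50-69%" | ">=70%"
    composition: str   # "calcified" | "non-calcified" | "mixed"

def lesion_score(lesion: Lesion) -> float:
    """Score of a single plaque as the product of its three attributes."""
    return (LOCATION_WEIGHT[lesion.location]
            * STENOSIS_WEIGHT[lesion.stenosis]
            * COMPOSITION_WEIGHT[lesion.composition])

def total_burden(lesions: list[Lesion]) -> float:
    """Total burden: sum over all lesions, obstructive and non-obstructive."""
    return sum(lesion_score(l) for l in lesions)

if __name__ == "__main__":
    coronary_tree = [
        Lesion("proximal", "50-69%", "mixed"),
        Lesion("distal", "<50%", "calcified"),
        Lesion("mid", "<50%", "non-calcified"),
    ]
    print(f"Total burden: {total_burden(coronary_tree):.2f}")
```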

Relevance:

20.00%

Publisher:

Abstract:

The increasing use of information and communication technologies (ICT) in diverse professional and personal contexts calls for new knowledge and a set of abilities, competences and attitudes for an active and participative citizenship. In this context it is acknowledged that universities have an important role in innovating in the educational use of digital media to promote an inclusive digital literacy. The educational potential of digital technologies and resources has been recognized by both researchers and practitioners. Multiple pedagogical models and research approaches have already contributed to highlighting the importance of adapting instructional and learning practices and processes to concrete contexts and educational goals. Still, academic and scientific communities believe further investment in ICT research is needed in higher education. This study focuses on educational models that may help support uses of digital technology where these can have cognitive and educational relevance when compared to analogue technologies. A teaching and learning model, centered on the active role of students in the exploration, production, presentation and discussion of interactive multimedia materials, was developed and applied using the internet and exploring emergent semantic hypermedia formats. The research approach focused on the definition of design principles for developing class activities that were applied in three different iterations in undergraduate courses from two institutions, namely the University of Texas at Austin, USA and the University of Lisbon, Portugal. The analysis conducted in this study made it possible to evaluate the potential and efficacy of the proposed model and of the chosen authoring tool in supporting metacognitive skills and attitudes related to information structuring and management, storytelling and communication, using computers and the internet.

Relevance:

20.00%

Publisher:

Abstract:

The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval or document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract, from texts, what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
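As a concrete illustration of the keyword-extraction application mentioned above, the sketch below ranks a document's extracted concepts by Tf-Idf. The small concept lists stand in for the output of the ConceptExtractor, whose statistical extraction method is not reproduced here.

```python
# Minimal sketch of ranking keywords by Tf-Idf computed over extracted
# concepts rather than raw tokens. The concept lists below are stand-ins
# for ConceptExtractor output.
import math
from collections import Counter

# Each document is represented by the (single- and multi-word) concepts
# extracted from it.
docs_concepts = [
    ["president", "republic", "state visit", "republic"],
    ["image retrieval", "semantic concept", "database"],
    ["president", "election", "republic", "image retrieval"],
]

def tf_idf_ranking(doc_index: int, corpus: list[list[str]]) -> list[tuple[str, float]]:
    """Rank one document's concepts by Tf-Idf against the whole corpus."""
    n_docs = len(corpus)
    doc = corpus[doc_index]
    tf = Counter(doc)               # concept frequency inside the document
    df = Counter()                  # number of documents containing each concept
    for d in corpus:
        df.update(set(d))
    scores = {c: (tf[c] / len(doc)) * math.log(n_docs / df[c]) for c in tf}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for concept, score in tf_idf_ranking(0, docs_concepts):
        print(f"{concept}: {score:.3f}")
```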

Relevance:

20.00%

Publisher:

Abstract:

Ontologies formalized by means of Description Logics (DLs) and rules in the form of Logic Programs (LPs) are two prominent formalisms in the field of Knowledge Representation and Reasoning. While DLs adhere to the Open World Assumption and are suited for taxonomic reasoning, LPs implement reasoning under the Closed World Assumption, so that default knowledge can be expressed. However, for many applications it is useful to have a means that allows reasoning over an open domain and expressing rules with exceptions at the same time. Hybrid MKNF knowledge bases make such a means available by formalizing DLs and LPs in a common logic, the Logic of Minimal Knowledge and Negation as Failure (MKNF). Since rules and ontologies are used in open environments such as the Semantic Web, inconsistencies cannot always be avoided. This poses a problem due to the Principle of Explosion, which holds in classical logics. Paraconsistent logics offer a solution to this issue by assigning meaningful models even to contradictory sets of formulas. Consequently, paraconsistent semantics for DLs and LPs have been investigated intensively. Our goal is to apply the paraconsistent approach to the combination of DLs and LPs in hybrid MKNF knowledge bases. In this thesis, a new six-valued semantics for hybrid MKNF knowledge bases is introduced, extending the three-valued approach by Knorr et al., which is based on the well-founded semantics for logic programs. Additionally, a procedural way of computing paraconsistent well-founded models for hybrid MKNF knowledge bases by means of an alternating fixpoint construction is presented, and it is proven that the algorithm is sound and complete w.r.t. the model-theoretic characterization of the semantics. Moreover, it is shown that the new semantics is faithful w.r.t. well-studied paraconsistent semantics for DLs and LPs, respectively, and maintains the efficiency of the approach it extends.
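For readers unfamiliar with the alternating fixpoint construction, the sketch below computes the classical well-founded model of a plain normal logic program, i.e. the rule component only, without the DL part of a hybrid MKNF knowledge base and without the six-valued paraconsistent extension introduced in the thesis. The rule encoding and the example program are assumptions made for illustration.

```python
# Sketch of the classical alternating fixpoint construction for the
# well-founded semantics of a normal logic program. A rule
# "h :- p1, ..., pn, not q1, ..., not qm" is encoded as (h, [p1..pn], [q1..qm]).

def gamma(rules, interp):
    """Gelfond-Lifschitz operator: least model of the reduct of `rules`
    with respect to the set of atoms `interp` assumed true."""
    # Drop rules whose negative body is contradicted by interp, then
    # saturate the remaining positive rules.
    reduct = [(head, pos) for head, pos, neg in rules if not set(neg) & interp]
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in reduct:
            if head not in model and set(pos) <= model:
                model.add(head)
                changed = True
    return model

def well_founded_model(rules):
    """Alternating fixpoint: returns (true, undefined, false) atom sets."""
    atoms = {a for head, pos, neg in rules for a in [head, *pos, *neg]}
    true_atoms = set()                         # increasing underestimate
    while True:
        not_false = gamma(rules, true_atoms)   # overestimate: true or undefined
        new_true = gamma(rules, not_false)     # next underestimate
        if new_true == true_atoms:
            return true_atoms, not_false - true_atoms, atoms - not_false
        true_atoms = new_true

if __name__ == "__main__":
    # p :- not q.   q :- not p.   r :- not r.   s.
    program = [("p", [], ["q"]), ("q", [], ["p"]), ("r", [], ["r"]), ("s", [], [])]
    print(well_founded_model(program))  # s true; p, q, r undefined; nothing false
```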

Relevance:

20.00%

Publisher:

Abstract:

Sign language is the form of communication used by Deaf people, which, in most cases, has been learned since childhood. The problem arises when a non-Deaf person tries to communicate with a Deaf person, for example, when non-Deaf parents try to communicate with their Deaf child. In most cases, this situation tends to happen when the parents did not have time to properly learn sign language. This dissertation proposes the teaching of sign language through the use of serious games. Solutions similar to this proposal do exist; however, they are scarce and limited. For this reason, the proposed solution is built around a natural user interface intended to create a new concept in this field. The validation of this work consisted of the implementation of a serious game prototype, which can be used as a source for learning (Portuguese) sign language. In this validation, a module responsible for recognizing sign language was implemented first. This first stage allowed increased interaction and the construction of an algorithm capable of accurately recognizing sign language. In a second stage of the validation, the proposal was studied so that its pros and cons could be determined and considered in future work.
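The abstract does not detail how the recognition module works, so the following is only a toy illustration of one possible approach: classifying a hand pose by nearest-neighbour matching of landmark feature vectors, such as those produced by a depth or skeletal-tracking sensor. The templates and feature layout are hypothetical.

```python
# Purely illustrative sketch (the abstract does not describe the prototype's
# recognition algorithm): classify a hand pose as one of a few signs by
# nearest-neighbour matching against pre-recorded landmark templates.
import math

# Hypothetical templates: each sign is a flat list of normalized hand-joint
# coordinates recorded beforehand.
TEMPLATES = {
    "A": [0.1, 0.2, 0.3, 0.1, 0.5, 0.4],
    "B": [0.9, 0.8, 0.7, 0.6, 0.5, 0.4],
    "C": [0.4, 0.4, 0.4, 0.5, 0.5, 0.5],
}

def recognize(sample: list[float]) -> str:
    """Return the sign whose template is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda sign: dist(TEMPLATES[sign], sample))

if __name__ == "__main__":
    print(recognize([0.12, 0.22, 0.28, 0.12, 0.48, 0.41]))  # -> "A"
```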

Relevance:

20.00%

Publisher:

Abstract:

Since the invention of photography, humans have been using images to capture, store and analyse what they are interested in. With the developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. The principal qualities of image processing are flexibility, adaptability and the ability to easily and quickly process a large amount of information. Successful examples of applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because traditionally these measurements require complex and expensive equipment plus time-consuming calibration. Image processing can be an inexpensive and effective tool for load displacement measurements. Using an adequate image acquisition system and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. There are already several commercial software packages on the market; however, they are sold at a high cost. In this work, block-matching algorithms are used in order to compare the results from image processing with the data obtained with physical transducers during laboratory load tests. In order to test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
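A minimal sketch of the block-matching idea referred to above, assuming grayscale images held as NumPy arrays: the displacement of a reference block is found by minimising the sum of absolute differences (SAD) over a small search window. The function name, block size and search range are illustrative choices, not the thesis's exact implementation.

```python
# Minimal sketch of SAD-based block matching for displacement measurement
# between two grayscale images of a test specimen (e.g. before/after loading).
import numpy as np

def match_block(ref_img, cur_img, top, left, size=16, search=10):
    """Find the displacement (dy, dx) of the block ref_img[top:top+size,
    left:left+size] inside cur_img by minimising the sum of absolute
    differences over a +/- `search` pixel window."""
    block = ref_img[top:top + size, left:left + size].astype(np.int32)
    best, best_disp = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > cur_img.shape[0] or x + size > cur_img.shape[1]:
                continue  # candidate block falls outside the image
            cand = cur_img[y:y + size, x:x + size].astype(np.int32)
            sad = np.abs(block - cand).sum()
            if best is None or sad < best:
                best, best_disp = sad, (dy, dx)
    return best_disp

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 256, (128, 128), dtype=np.uint8)
    cur = np.roll(ref, shift=(3, -2), axis=(0, 1))   # synthetic 3 px down, 2 px left shift
    print(match_block(ref, cur, top=40, left=40))     # expected: (3, -2)
```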

Relevance:

20.00%

Publisher:

Abstract:

Instituto Politécnico de Lisboa (IPL) and Instituto Superior de Engenharia de Lisboa (ISEL). Support granted through grant SPRH/PROTEC/67580/2010, which partially supported this work.

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, the consumption of goods and services on the Internet is increasing steadily. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors, where customized products and services prevail. To survive and compete in today's markets, they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with big enterprises, these partnerships aim at sharing resources, knowledge and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To meet this demand, the development of a centralized information system is proposed, which allows enterprises to select and create dynamic manufacturing networks with the capability to monitor the entire manufacturing process, including the assembly, packaging and distribution phases. Even networking partners that come from the same area have multiple, heterogeneous representations of the same knowledge, denoting their own view of the domain. Thus, different conceptual, semantic and, consequently, lexical knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. The creation of a framework, supported by a tool, that flexibly enables the identification, classification and resolution of such semantic heterogeneities is therefore required. This tool will support the network in establishing semantic mappings, facilitating the integration of the various enterprises' information systems.
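As a rough illustration of the semantic-mapping support described above, the sketch below proposes candidate correspondences between two enterprises' vocabularies using simple lexical similarity. The vocabularies and threshold are assumptions; real ontology-matching frameworks combine lexical, structural and semantic evidence.

```python
# Minimal sketch: align two enterprises' heterogeneous vocabularies by
# lexical similarity so that equivalent concepts can be linked and later
# validated by a human or a reasoner.
from difflib import SequenceMatcher

enterprise_a = ["customer order", "bill of materials", "assembly line"]
enterprise_b = ["client order", "materials bill", "packaging station"]

def propose_mappings(vocab_a, vocab_b, threshold=0.6):
    """Return candidate concept mappings whose string similarity exceeds
    the threshold, sorted from most to least similar."""
    mappings = []
    for a in vocab_a:
        for b in vocab_b:
            score = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if score >= threshold:
                mappings.append((a, b, round(score, 2)))
    return sorted(mappings, key=lambda m: m[2], reverse=True)

if __name__ == "__main__":
    for a, b, score in propose_mappings(enterprise_a, enterprise_b):
        print(f"{a}  <->  {b}   (similarity {score})")
```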

Relevance:

20.00%

Publisher:

Abstract:

Field Lab in Entrepreneurial Innovative Ventures

Relevance:

20.00%

Publisher:

Abstract:

Currently, Portugal assumes itself as a democratic State based on the substantive rule of law, sustained by a legal system seeking the right balance between the guarantee of the fundamental rights and freedoms foreseen in Portugal's Fundamental Law and criminal prosecution. The architecture of the penal code rests, roughly speaking, on a basic accusatory structure, "deliberately attached to one of the most remarkable achievements of civilizational democratic progress, and in obedience to the constitutional commandment", in balance with the official investigation principle, valid both for the purpose of prosecution and trial. Regarding the principle of non-self-incrimination (nemo tenetur se ipsum accusare), briefly defined as the defendant's right not to be obliged to contribute to his own incrimination, it should be stressed that there is no explicit consecration in the Portuguese Constitution; it is commonly accepted as an implicit constitutional provision deriving from other constitutional rights and principles, first and foremost the meaning and scope of the concept of a democratic State based on the rule of law, embedded in the Fundamental Law, and the guidelines of the constitutional principles of human dignity, freedom of action and the presumption of innocence. In any case, regarding the (in)applicability of the principle of the prohibition of self-incrimination to the Criminal Police Bodies in the trial hearing in Court, and sharing an idea of Guedes Valente, the truth is that the exercise of criminal action must tread a transparent path, not resorting to methods of obtaining evidence that violate the law, public order or democratic principles and loyalty (Guedes Valente, 2013, p. 484). Within the framework of the penal process relating to the trial, which is assumed to be the true phase of the process, the witness represents a relevant figure for the administration of criminal justice, for testimonial evidence is, in Othmar Jauernig's view, the worst kind of evidence, but also the most frequent (Jauernig, 1998, p. 289). As coadjutants of the Public Prosecutor and, in specific cases, of the investigating judge, the Criminal Police Bodies are invested with high responsibility, being "the arms and eyes of the Judicial Authorities in pursuing the criminal investigation... which has as its ultimate goal the fulfillment of the Law pursuing the defense of society" (Guedes Valente, 2013, p. 485). It is in this context and as witnesses that, throughout their operational career, the Criminal Police Bodies are required to attend the trial hearing and clarify for the Court their view of the facts relating to occurrences of a criminal context, thus contributing very significantly and, in some cases, decisively to the proper administration of Portuguese criminal justice. With regard to the intervention of Criminal Police Bodies in the trial hearing in Court, it is important that they pay attention to a set of standards concerning the preparation of the testimony, the giving of the testimony itself and, also, its conclusion. It should be emphasized that these guidelines may become crucial for the quality of the police testimony at the trial hearing, thus leading to an improvement in the enforcement of the justice system. In this vein, while preparing the testimony, the Criminal Police Bodies must present themselves in court properly dressed, read the case files carefully beforehand, discuss the facts being judged with other Criminal Police Bodies and prepare for potential questions. Later, while giving testimony during the trial, the Criminal Police Bodies must, in summary, take the oath in a convincing manner, feel comfortable, start well by convincingly answering the first question, keep an attitude of serenity, adopt an attitude of collaboration, avoid reading documents, demonstrate deference and seriousness before the judicial operators, use simple and objective language, adopt a fluent speech, use nonverbal language correctly, avoid spontaneity by responding only to what is asked, report only the truth, avoid hesitations and contradictions, be impartial and maintain eye contact with the judge. Finally, at the conclusion of the testimony, the Criminal Police Bodies should rise in a smooth manner, avoid showing relief, resentment or satisfaction, leave a credible and professional image and, without much formality, request the judge's permission to leave the courtroom. As a final note, it is important to stress that "The intervention of the Criminal Police Bodies in the trial hearing in Court" is a theme of crucial importance not only for members of the Police and Security Forces, who must approach this subject with the utmost seriousness and professionalism, but also for the proper administration of the criminal justice system in Portugal.

Relevance:

20.00%

Publisher:

Abstract:

RESUMO: O conceito de doenças raras como entidade própria começou a ser divulgado na comunidade médica no início deste século. A perspectiva de congregar múltiplas patologias, com características diferentes, valorizando a baixa frequência com que ocorrem na população, interessou a comunidade científica, as famílias, a indústria e os serviços de saúde. Esperava-se encontrar estratégias para melhorar a qualidade dos cuidados de saúde prestados a estes doentes. Uma vez que a informação científica sobre doenças raras está dispersa por diversas fontes, o primeiro grande desafio foi sistematizá-la de forma a obter o "estado da arte". A investigação, que decorreu entre 2001 e 2010, teve como objectivo principal a caracterização dos doentes e das doenças raras numa população com características restritas mas não fechada, como é o caso da ilha de S. Miguel nos Açores. Foram identificados 467 doentes a partir de várias fontes e monitorizado o nascimento de recém-nascidos com doença rara durante 10 anos. A prevalência das doenças raras encontrada na ilha de S. Miguel foi de 0,34%, enquanto a inerente à definição de doença rara é de 6% a 8% da população na União Europeia. A diferença encontrada poderá decorrer de se ter sobrestimado o verdadeiro valor da prevalência das doenças raras na União Europeia. A incidência de doenças raras determinada na amostra foi de 0,1% e a taxa de mortalidade por causa específica foi de 0,14‰. O diagnóstico foi confirmado por técnicas laboratoriais de citogenética ou genética molecular em 43% dos doentes da amostra. Não foi identificado nenhum agregado populacional com doença rara para além do já conhecido para a DMJ. A criação de uma metodologia de estudo implicou a construção de um registo de doentes. Para tal foi utilizado o conhecimento adquirido anteriormente sobre uma doença rara que serviu de paradigma: a doença de Machado-Joseph. Na sequência dos resultados obtidos, foi considerada útil a introdução de variáveis como a figura do cuidador, o cônjuge, o número de filhos do casal, a data da primeira consulta de Genética, o tempo decorrido entre o início dos sintomas e o acesso à consulta de Genética e entre esta e a obtenção do diagnóstico, para melhor compreender o contexto de vida destes doentes, na perspectiva de poderem vir a ser incorporadas como indicadores.

ABSTRACT: The concept of rare diseases as a condition of its own began to be disclosed in the medical community at the beginning of this century. The prospect of bringing together multiple pathologies with different features, emphasizing the low frequency with which they occur in the population, interested the scientific community, families, industry and health care services. The aim was to find strategies to improve the quality of care provided to these patients. Given that the scientific information on rare diseases is spread out across several sources, the first major challenge was to systematize it in order to obtain the "state of the art". The research took place between 2001 and 2010 and had as its main objective the characterization of patients and rare diseases in a population with specific features, but not confined, as is the case of São Miguel Island in the Azores. Over a period of 10 years, 467 patients were identified from multiple sources and newborns with rare diseases were monitored. The prevalence of rare diseases found on São Miguel Island was 0.34%, compared to the 6% to 8% of the population implied by the definition of rare disease in the European Union. This discrepancy may be explained by an overestimation of the true prevalence of rare diseases in the European Union. The incidence of rare diseases in the sample was 0.1% and the cause-specific mortality rate was 0.14‰. The diagnosis was confirmed by cytogenetic or molecular genetics analysis in 43% of the patients in the sample. No population cluster with rare disease was identified besides the one already known for Machado-Joseph disease. The methodology for the study involved the construction of a database of patients. For this purpose, previously acquired knowledge on a rare disease that served as a paradigm, Machado-Joseph disease, was used. Following the results obtained, it was considered useful to introduce variables such as the caregiver, the spouse, the couple's number of children, the date of the first Genetics appointment, and the time elapsed between the onset of symptoms and access to the Genetics appointment and between this appointment and the final diagnosis, to better understand these patients' life context, so that these variables may be incorporated as indicators.
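As a consistency check on the figures above (a calculation not shown in the abstract), and assuming a resident population of roughly 137,000 for São Miguel Island, the reported prevalence follows from the number of identified patients:

```latex
\text{prevalence} = \frac{\text{identified patients}}{\text{resident population}}
                  = \frac{467}{137\,000} \approx 0.0034 = 0.34\%
```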

Relevance:

20.00%

Publisher:

Abstract:

Wine Tourism is gaining importance in today's world and more destinations and establishments have been emerging. After understanding the importance of this economic activity and the factors it must have in order to succeed, a new project was conceived for Central Alentejo, taking into account its potential. This project is an example of how to take advantage of Wine Tourism in wine regions that are underexplored, such as Aldeias de Montoito, the village near Redondo for which a Business Plan will be created, explaining the strategies to pursue in order to build a successful Wine Tourism destination.

Relevance:

20.00%

Publisher:

Abstract:

Double degree

Relevance:

20.00%

Publisher:

Abstract:

Different aspects of hepatosplenic schistosomiasis are revisited here. Manson's schistosomiasis causes periportal fibrosis and portal hypertension in approximately 6% of infected subjects, usually with preservation of their hepatic function. The assessment of liver involvement is of major importance in determining the prognosis and risk of complications from schistosomiasis, such as upper digestive bleeding secondary to variceal rupture. For many years, the diagnosis of hepatosplenic schistosomiasis and liver fibrosis was made by abdominal palpation and the finding of liver and/or spleen enlargement. However, there is no consensus regarding the clinical parameters of the liver and spleen to be considered in this physical evaluation. For the last three decades, abdominal ultrasound (US) has become the best imaging technique to evaluate liver fibrosis caused by schistosomiasis mansoni. However, US is a subjective procedure and is therefore examiner-dependent. Magnetic resonance imaging (MRI) findings have provided valuable information in addition to ultrasound and clinical examination. The combination of a comprehensive history and physical examination, basic laboratory tests (a stool examination for Schistosoma mansoni eggs and a blood cell count), biomarkers for liver fibrosis/portal hypertension and imaging methods seems to offer the best approach for evaluating patients with this disease. In situations where research is involved or in patients with severe disease, MRI may be considered.

Relevance:

20.00%

Publisher:

Abstract:

According to a recent Eurobarometer survey (2014), 68% of Europeans tend not to trust national governments. As the increasing alienation of citizens from politics endangers democracy and welfare, governments, practitioners and researchers look for innovative means to engage citizens in policy matters. One of the measures intended to overcome the so-called democratic deficit is the promotion of civic participation. The proliferation of digital media offers a set of novel characteristics related to interactivity, ubiquitous connectivity, social networking and inclusiveness that enable new forms of society-wide collaboration with a potential impact on leveraging participative democracy. Following this trend, e-Participation is an emerging research area that consists of the use of Information and Communication Technologies to mediate and transform the relations among citizens and governments towards increasing citizens' participation in public decision-making. However, despite widespread efforts to implement e-Participation through research programs, new technologies and projects, exhaustive studies of the achieved outcomes reveal that it has not yet been successfully incorporated into institutional politics. Given the problems underlying e-Participation implementation, the present research suggested that, rather than project-oriented efforts, the cornerstone for successfully implementing e-Participation in public institutions as a sustainable, value-adding activity is systematic organisational planning embodying the principles of open governance and open engagement. It further suggested that Business Process Management (BPM), as a management discipline, can act as a catalyst to enable the desired transformations towards value creation throughout the policy-making cycle, including political, organisational and, ultimately, citizen value. Following these findings, the primary objective of this research was to provide an instrumental model to foster e-Participation sustainability across Government and Public Administration towards a participatory, inclusive, collaborative and deliberative democracy. The developed artefact, consisting of an e-Participation Organisational Semantic Model (ePOSM) underpinned by a BPM-steered approach, introduces this vision. This approach to e-Participation was modelled through a semi-formal lightweight ontology stack structured in four sub-ontologies, namely e-Participation Strategy, Organisational Units, Functions and Roles. The ePOSM facilitates e-Participation sustainability by: (1) promoting a common and cross-functional understanding of the concepts underlying e-Participation implementation and of their articulation, bridging the gap between technical and non-technical users; (2) providing an organisational model which allows a centralised and consistent roll-out of strategy-driven e-Participation initiatives, supported by operational units dedicated to the execution of transformation projects and participatory processes; (3) providing a standardised organisational structure, goals, functions and roles related to e-Participation processes, which enhances process-level interoperability among government agencies; (4) providing a representation usable in software development for the automation of business processes, which allows advanced querying, using a reasoner or inference engine, to retrieve concrete and specific information about the e-Participation processes in place. An evaluation of the achieved outcomes, as well as a comparative analysis with existing models, suggested that this innovative approach, tackling the organisational planning dimension, can constitute a stepping stone to harness e-Participation value.
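A minimal sketch, using Python and rdflib, of how the four sub-ontologies named above might be represented and queried with SPARQL. The class names echo the abstract, but the namespace, properties and instances are illustrative assumptions rather than the actual ePOSM vocabulary.

```python
# Minimal sketch of representing and querying an e-Participation
# organisational model with rdflib. The four classes echo the sub-ontologies
# named in the abstract; namespace, properties and instances are hypothetical.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

EPOSM = Namespace("http://example.org/eposm#")  # hypothetical namespace
g = Graph()
g.bind("eposm", EPOSM)

# Classes corresponding to the four sub-ontologies.
for cls in ("Strategy", "OrganisationalUnit", "Function", "Role"):
    g.add((EPOSM[cls], RDF.type, RDFS.Class))

# A strategy-driven initiative executed by a dedicated unit (illustrative data).
g.add((EPOSM.OpenBudget2025, RDF.type, EPOSM.Strategy))
g.add((EPOSM.ParticipationOffice, RDF.type, EPOSM.OrganisationalUnit))
g.add((EPOSM.ParticipationOffice, EPOSM.executes, EPOSM.OpenBudget2025))
g.add((EPOSM.ParticipationOffice, RDFS.label, Literal("Participation Office")))

# Query: which organisational units execute which e-Participation strategies?
results = g.query("""
    PREFIX eposm: <http://example.org/eposm#>
    SELECT ?unit ?strategy WHERE {
        ?unit a eposm:OrganisationalUnit ;
              eposm:executes ?strategy .
        ?strategy a eposm:Strategy .
    }
""")
for unit, strategy in results:
    print(unit, "executes", strategy)
```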