987 results for concept extraction
Abstract:
BACKGROUND: The aims of the study were to evaluate the prevalence of acute coronary syndrome (ACS) among patients presenting with atypical chest pain who are evaluated for acute aortic syndrome (AAS) or pulmonary embolism (PE) with computed tomography angiography (CTA), and to discuss the rationale for the use of a triple rule-out (TRO) protocol for triaging these patients. METHODS: This study is a retrospective analysis of patients presenting with atypical chest pain and evaluated with thoracic CTA for suspicion of AAS/PE. Two physicians reviewed patient files for demographic characteristics, initial CT findings and final clinical diagnosis. Patients were classified according to CTA findings into AAS, PE and other diagnoses, and according to final clinical diagnosis into AAS, PE, ACS and other diagnoses. RESULTS: Four hundred and sixty-seven patients were evaluated: 396 (84.8%) patients for clinical suspicion of PE and 71 (15.2%) patients for suspicion of AAS. The prevalence of ACS and AAS was low among the PE patients: 5.5% and 0.5% respectively (P = 0.0001), while the prevalence of ACS and PE was 18.3% and 5.6% among AAS patients (P = 0.14 and P = 0.34 respectively). CONCLUSION: The prevalence of ACS and AAS among patients clinically suspected of having PE is limited, while the prevalence of ACS and PE among patients clinically suspected of having AAS is significant. Accordingly, patients suspected of PE could be evaluated with dedicated PE CTA, while those suspected of AAS should still be triaged using a TRO protocol.
Abstract:
The goal of this study was to investigate the performance of 3D synchrotron differential phase contrast (DPC) imaging for the visualization of both macroscopic and microscopic aspects of atherosclerosis in the mouse vasculature ex vivo. The hearts and aortas of 2 atherosclerotic and 2 wild-type control mice were scanned with DPC imaging at an isotropic resolution of 15 μm. The coronary artery vessel walls were segmented in the DPC datasets to assess their thickness, and histological staining was performed at the level of atherosclerotic plaques. DPC imaging allowed for the visualization of complex structures such as the coronary arteries and their branches, the thin fibrous cap of atherosclerotic plaques, as well as the chordae tendineae. The coronary vessel wall thickness ranged from 37.4 ± 5.6 μm in proximal coronary arteries to 13.6 ± 3.3 μm in distal branches. No consistent differences in coronary vessel wall thickness were detected between the wild-type and atherosclerotic hearts in this proof-of-concept study, although the standard deviation in the atherosclerotic mice was higher in most segments, consistent with the observation of occasional focal vessel wall thickening. Overall, DPC imaging of the mouse cardiovascular system allowed for a simultaneous, detailed 3D morphological assessment of both large structures and microscopic details.
Abstract:
This paper reviews the concept of presence in immersive virtual environments, the sense of being there signalled by people acting and responding realistically to virtual situations and events. We argue that presence is a unique phenomenon that must be distinguished from the degree of engagement or involvement in the portrayed environment. We argue that there are three necessary conditions for presence: (a) a consistent, low-latency sensorimotor loop between sensory data and proprioception; (b) statistical plausibility: images must be statistically plausible in relation to the probability distribution of images over natural scenes, with the level of immersion acting as a constraint on this plausibility; (c) behaviour-response correlations: presence may be enhanced and maintained over time by appropriate correlations between the state and behaviour of participants and responses within the environment, correlations that show appropriate responses to the activity of the participants. We conclude with a discussion of methods for assessing whether presence occurs, and in particular recommend the approach of comparison with ground truth, giving some examples of this.
Can the administration be trusted? An analysis of the concept of trust, applied to the public sector
Abstract:
In the first part of this paper, we present the various academic debates and, where applicable, questions that remain open in the literature, particularly regarding the nature of trust, the distinction between trust and trustworthiness, its role in specific relationships and its relationship to control. We then propose a way of demarcating and operationalizing the concepts of trust and trustworthiness. In the second part, on the basis of the conceptual clarifications we present, we put forward a number of "anchor points" regarding how trust is apprehended in the public sector with regard to the various relationships that can be studied. Schematically, we distinguish between two types of relationships in the conceptual approach to trust: on one hand, the trust that citizens, or third parties, place in the State or in various public sector authorities or entities, and on the other hand, trust within the State or the public sector, between its various authorities, entities, and actors. While studies have traditionally focused on citizens' trust in their institutions, the findings, limitations and problems observed in public-sector coordination following the reforms associated with New Public Management have also elicited growing interest in the study of trust in the relationships between the various actors within the public sector. Both the theoretical debates we present and our propositions have been extracted and adapted from an empirical comparative study of coordination between various Swiss public-service organizations and their politico-administrative authority. Using the analysis model developed for this specific relationship, between various actors within the public service, and in the light of the theoretical elements on which the development of this model was based, we propose some avenues for further study (questions that remain open) regarding the consideration and understanding of citizens' trust in the public sector.
Abstract:
In this study we used market settlement prices of European call options on stock index futures to extract the implied probability distribution function (PDF). The method used produces a PDF of the returns of the underlying asset at the expiration date from the implied volatility smile. With this method, the assumption of a lognormal distribution (the Black-Scholes model) is tested. The market's view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (the S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A subjective view of the PDF that deviates from the market's can be used to form a strategy, as discussed in the last section.
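As a hedged illustration of the extraction step summarized above, the sketch below applies the Breeden and Litzenberger (1978) relation, f(K) = e^{rT} ∂²C/∂K², to a smoothed call-price curve using a central finite difference. The strike grid, rate, maturity and quadratic smile are invented stand-ins for the smoothed S&P 500 or DAX smiles used in the study, and Python is an assumed implementation choice.

```python
import numpy as np
from scipy.stats import norm

# Illustrative (assumed) inputs standing in for a smoothed smile.
r, T = 0.02, 0.25                       # risk-free rate and time to expiry
strikes = np.linspace(80.0, 120.0, 81)  # strike grid
futures_level = 100.0                   # current index futures level

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Shimko-style step: a smooth (here quadratic) volatility smile is
# converted back into call prices across the strike grid.
smile = 0.20 + 0.000005 * (strikes - 100.0) ** 2
calls = bs_call(futures_level, strikes, r, T, smile)

# Breeden-Litzenberger (1978): the risk-neutral density is the discounted
# second derivative of the call price with respect to strike,
#   f(K) = exp(rT) * d2C/dK2,
# approximated here with a central finite difference.
dK = strikes[1] - strikes[0]
pdf = np.exp(r * T) * np.diff(calls, 2) / dK**2

print(pdf.sum() * dK)  # should integrate to roughly 1 over a wide strike range
```

Comparing such an extracted density with a fitted lognormal (Black-Scholes) density is, in essence, the test of the lognormality assumption described in the abstract.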
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web and, hence, web users relying on search engines only are unable to discover and access a large amount of information from the non-indexable part of the Web. Specifically, dynamic pages generated based on parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in searchers' results. Such search interfaces provide web users with online access to myriads of databases on the Web. In order to obtain some information from a web database of interest, a user issues his/her query by specifying query terms in a search form and receives the query results, a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is a huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterization of the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, which is sufficiently long ago for any web-related concept/technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on the study of deep web sites in English. One can then expect that findings from these surveys may be biased, especially owing to a steady increase in non-English web content. In this way, surveying national segments of the deep Web is of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from the national segment of the Web. Finding deep web resources: The deep Web has been growing at a very fast pace. It has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches assume that search interfaces to the web databases of interest are already discovered and known to query systems. However, such assumptions do not hold true, mostly because of the large scale of the deep Web: indeed, for any given domain of interest there are too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. Specifically, the I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources.
Unlike almost all other approaches to the deep Web proposed so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user. This is all the more so as the interfaces of conventional search engines are also web forms. At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Manually filling out forms is cumbersome and not feasible for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. Thus, automating the querying and retrieval of data behind search interfaces is desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
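As a rough, hedged illustration of the form-analysis step mentioned above (not the I-Crawler itself, nor the thesis's data model), the sketch below pairs the input fields of a simple HTML search form with their visible labels, which is the kind of interface description an automated query system needs before it can fill a form. BeautifulSoup is an assumed library choice and the form markup is invented for the example.

```python
from bs4 import BeautifulSoup

# Invented example form; a real deep web search interface would be fetched
# from a live page rather than embedded as a string.
html = """
<form action="/search" method="get">
  <label for="q">Title keywords</label> <input type="text" name="q" id="q">
  <label for="year">Year</label>
  <select name="year" id="year"><option>2010</option><option>2011</option></select>
  <input type="submit" value="Search">
</form>
"""

soup = BeautifulSoup(html, "html.parser")
form = soup.find("form")

# Map each label to the field it describes via the "for" attribute.
labels = {lab.get("for"): lab.get_text(strip=True) for lab in form.find_all("label")}

# Collect a simple description of every fillable field in the form.
fields = []
for field in form.find_all(["input", "select", "textarea"]):
    if field.get("type") == "submit":
        continue
    fields.append({
        "name": field.get("name"),
        "label": labels.get(field.get("id"), ""),
        "kind": field.name if field.name != "input" else field.get("type", "text"),
    })

print(form.get("action"), form.get("method"))
print(fields)
# A crawler could then issue a query against the form's action URL,
# e.g. requests.get(base_url + "/search", params={"q": "deep web", "year": "2010"}).
```

This label-to-field mapping is only the minimal piece of interface information; the thesis's data model additionally covers client-side scripts, result-page structure and a form query language.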
Abstract:
The oxidative potential (OP) of particulate matter has been proposed as a toxicologically relevant metric. This concept is already frequently used for the hazard characterization of ambient particles, but it is still seldom applied in the occupational field. The objective of this study was to assess the OP in two different types of workplaces and to investigate the relationship between the OP and the physicochemical characteristics of the collected particles. At a toll station at the entrance of a tunnel ('Tunnel' site) and at three different mechanical yards ('Depot' sites), we assessed particle mass (PM4 and PM2.5), size distribution, number and surface area, organic and elemental carbon, polycyclic aromatic hydrocarbons (PAH), and four quinones, as well as iron and copper concentrations. The OP was determined directly on filters, without extraction, using the dithiothreitol assay (DTT assay, OP(DTT)). The average mass concentration of respirable particles (PM4) at the Tunnel site was about twice that at the Depot sites (173±103 and 90±36 µg m(-3), respectively), whereas the OP(DTT) was practically identical at all sites (10.6±7.2 pmol DTT min(-1) μg(-1) at the Tunnel site; 10.4±4.6 pmol DTT min(-1) μg(-1) at the Depot sites). The OP(DTT) of PM4 was mostly carried by the finer PM2.5 fraction (OP(DTT) PM2.5: 10.2±8.1 pmol DTT min(-1) μg(-1); OP(DTT) PM4: 10.5±5.8 pmol DTT min(-1) μg(-1) for all sites), suggesting the presence of redox-inactive components in the PM2.5-4 fraction. Although the reactivity was similar at the Tunnel and Depot sites irrespective of the metric chosen (OP(DTT) µg(-1) or OP(DTT) m(-3)), the chemicals associated with OP(DTT) differed between the two types of workplaces. Organic carbon, quinones, and/or metal content (Fe, Cu) were strongly associated with the DTT reactivity at the Tunnel site, whereas only Fe and PAH were associated (positively and negatively, respectively) with this reactivity at the Depot sites. These results demonstrate the feasibility of measuring the OP(DTT) in occupational environments and suggest that the particulate OP(DTT) integrates different physicochemical properties. This parameter could be a useful exposure proxy for investigating particle exposure-related oxidative stress and its consequences. Further research is needed, mostly to demonstrate the association of OP(DTT) with relevant oxidative endpoints in humans exposed to particles.
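As a purely illustrative note on the two reporting metrics mentioned above (OP(DTT) per µg and per m³), the sketch below shows how a single DTT consumption rate measured on a filter can be normalized either by the collected particle mass or by the sampled air volume; all numbers are hypothetical and Python is an assumed choice.

```python
# Hypothetical filter measurement, for illustration only.
dtt_rate_pmol_per_min = 520.0   # DTT consumption rate of the filter sample
particle_mass_ug = 50.0         # particle mass collected on the filter (PM4)
air_volume_m3 = 0.6             # air volume drawn through the filter

# Mass-normalized OP: intrinsic reactivity per microgram of particles.
op_per_ug = dtt_rate_pmol_per_min / particle_mass_ug    # pmol DTT min^-1 ug^-1

# Volume-normalized OP: exposure-relevant reactivity per cubic metre of air.
op_per_m3 = dtt_rate_pmol_per_min / air_volume_m3       # pmol DTT min^-1 m^-3

print(f"OP(DTT) per ug: {op_per_ug:.1f}")
print(f"OP(DTT) per m3: {op_per_m3:.1f}")
```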
Abstract:
Specific demand for service concept creation has arisen from industrial organizations' desire to find new and innovative ways to differentiate their offering by increasing the level of customer services. Providers of professional services have also demanded new concepts and approaches for their businesses as these industries have become increasingly competitive. Firms are now seeking better ways to understand and segment their customers, to ensure the delivery of quality services and to strengthen their position in aggressively competitive markets. This thesis is intended to provide management consulting companies with a new work method that enables service concept creation in a business-to-business environment. The model defines the service concept as a combination of delivered value and target customers; the operating model is brought in as a third dimension when the service concept creation guidelines are tested in the target organization. For this testing, service concepts for a management consulting company are created. The service concepts are designed to serve as a solid foundation for further service improvements. Recommendations and proposals for further action related to service development in the target organization are presented, along with recommendations for further improving the model created.
Abstract:
Bandura (1986) developed the concept of moral disengagement to explain how individuals can engage in detrimental behavior while experiencing low levels of negative feelings such as guilt. Most of the research conducted on moral disengagement has investigated it as a global concept (e.g., Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Moore, Detert, Klebe Treviño, Baker, & Mayer, 2012), whereas Bandura (1986, 1990) initially developed eight distinct mechanisms of moral disengagement grouped into four categories representing the various means through which moral disengagement can operate. In our work, we propose to develop measures of this concept based on its categories, namely rightness of actions, rejection of personal responsibility, distortion of negative consequences, and negative perception of the victims, that are not specific to a particular area of research. Through these measures, we aim to better understand the cognitive process leading individuals to behave unethically by investigating which category plays a role in explaining unethical behavior depending on the situation individuals are in. To this purpose, we conducted five studies to develop the measures and to test their predictive validity. In particular, we assessed the ability of the newly developed measures to predict two types of unethical behavior, i.e. discriminatory behavior and cheating behavior. Confirmatory factor analyses demonstrated a good fit of the model, and the findings generally supported our predictions.
Abstract:
BACKGROUND: Low-dose, Visudyne®-mediated photodynamic therapy (photo-induction) was shown to selectively enhance tumor vessel transport, causing increased uptake of systemically administered chemotherapy in various tumor types grown on rodent lungs. The present experiments explore the efficacy of photo-induced vessel modulation combined with intravenous (IV) liposomal cisplatin (Lipoplatin®) on rodent lung tumors and the feasibility/toxicity of this approach in porcine chest cavities. MATERIAL AND METHODS: Three groups of Fischer rats underwent orthotopic sarcoma (n = 14), mesothelioma (n = 14), or adenocarcinoma (n = 12) implantation on the left lung. Half of the animals of each group underwent photo-induction (0.0625 mg/kg Visudyne®, 10 J/cm(2)) followed by IV administration of Lipoplatin® (5 mg/kg), and the other half received Lipoplatin® without photo-induction. Two groups of minipigs then underwent intrapleural thoracoscopic (VATS) photo-induction (0.0625 mg/kg Visudyne®; 30 J/cm(2) hilum; 10 J/cm(2) apex/diaphragm) with in situ light dosimetry in combination with IV Lipoplatin® administration (5 mg/kg). Protocol I (n = 6) received Lipoplatin® immediately after light delivery and Protocol II (n = 9) 90 minutes before light delivery. Three additional animals received Lipoplatin® and VATS pleural biopsies but no photo-induction (controls). Lipoplatin® concentrations were analyzed in blood and tissues before and at regular intervals after photo-induction using inductively coupled plasma mass spectrometry. RESULTS: Photo-induction selectively increased Lipoplatin® uptake in all orthotopic tumors. It significantly increased the ratio of tumor to lung Lipoplatin® concentration in sarcoma (P = 0.0008) and adenocarcinoma (P = 0.01) but not in mesothelioma, compared to IV drug application alone. In minipigs, intrapleural photo-induction combined with systemic Lipoplatin® was well tolerated, with no toxicity at 7 days for either treatment protocol. The pleural Lipoplatin® concentrations were not significantly different at the 10 and 30 J/cm(2) locations, but they were significantly higher in Protocol I than in Protocol II (2.37 ± 0.7 vs. 1.37 ± 0.7 ng/mg, P < 0.001). CONCLUSION: Visudyne®-mediated photo-induction selectively enhances the uptake of IV administered Lipoplatin® in rodent lung tumors. Intrapleural VATS photo-induction with identical treatment conditions combined with IV Lipoplatin® chemotherapy is feasible and well tolerated in a porcine model. Lasers Surg. Med. 47:807-816, 2015. © 2015 Wiley Periodicals, Inc.
Abstract:
Customer relationship management (CRM) has been an essential part of marketing for over 20 years. Today's business environment is fast-changing, international and highly competitive, which is why one-to-one customer relationships are the most important factor for long-term profitability. However, managing relationships and serving profitable customers has always been challenging. The objective of this thesis was to define the main obstacles the case company must overcome to succeed in CRM; possible solutions have also been defined. The main elements of CRM implementation, i.e. people, processes and technologies, clearly underlie these obstacles and solutions. This thesis also presents theoretical background on CRM and is meant to act as a guidebook inside the organisation, spreading knowledge of CRM to those who are less familiar with the topic.