977 results for Multi-Domain


Relevance: 60.00%

Publisher:

Abstract:

This paper presents a comparative study of three closely related Bayesian models for unsupervised document-level sentiment classification, namely the latent sentiment model (LSM), the joint sentiment-topic (JST) model, and the Reverse-JST model. Extensive experiments have been conducted on two corpora, the movie review dataset and the multi-domain sentiment dataset. It has been found that, while all three models achieve better or comparable performance on these two corpora compared to existing unsupervised sentiment classification approaches, both JST and Reverse-JST are also able to extract sentiment-oriented topics. In addition, Reverse-JST consistently performs worse than JST, suggesting that the JST model is more appropriate for joint sentiment-topic detection.

Relevance: 60.00%

Publisher:

Abstract:

We propose a novel framework in which an initial classifier is learned by incorporating prior information extracted from an existing sentiment lexicon. Preferences on the expected sentiment labels of those lexicon words are expressed using generalized expectation criteria. Documents classified with high confidence are then used as pseudo-labeled examples for automatic domain-specific feature acquisition. The word-class distributions of these self-learned features are estimated from the pseudo-labeled examples and are used to train another classifier by constraining the model's predictions on unlabeled instances. Experiments on both the movie review data and the multi-domain sentiment dataset show that our approach attains comparable or better performance than existing weakly-supervised sentiment classification methods, despite using no labeled documents.
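The pipeline above can be sketched as a minimal self-training loop. The toy corpus, four-word lexicon, confidence threshold, and add-one-smoothed word-class estimator below are illustrative assumptions, not the paper's actual generalized-expectation-based model:

```python
# Hedged sketch: lexicon-prior classifier -> pseudo-labels -> self-learned
# word-class distributions -> final classifier. All data here is toy data.
from collections import Counter

LEXICON = {"good": "pos", "great": "pos", "bad": "neg", "awful": "neg"}

def lexicon_score(doc):
    """Initial classifier: vote by lexicon words; return (label, confidence)."""
    votes = Counter(LEXICON[w] for w in doc if w in LEXICON)
    if not votes:
        return None, 0.0
    label, n = votes.most_common(1)[0]
    return label, n / sum(votes.values())

def train_self_learned(docs, threshold=1.0):
    """Pseudo-label confident docs, then estimate word-class distributions
    (add-one smoothing) from them -- the 'self-learned features'."""
    counts = {"pos": Counter(), "neg": Counter()}
    for doc in docs:
        label, conf = lexicon_score(doc)
        if label is not None and conf >= threshold:
            counts[label].update(doc)
    vocab = set(counts["pos"]) | set(counts["neg"])

    def classify(doc):
        scores = {}
        for c in ("pos", "neg"):
            total = sum(counts[c].values()) + len(vocab)
            score = 1.0
            for w in doc:
                if w in vocab:
                    score *= (counts[c][w] + 1) / total
            scores[c] = score
        return max(scores, key=scores.get)
    return classify

docs = [["good", "great", "plot"], ["awful", "bad", "acting"],
        ["great", "fun", "plot"], ["bad", "boring", "acting"]]
clf = train_self_learned(docs)
print(clf(["fun", "plot"]))   # labeled via self-learned, non-lexicon features
```

In this sketch, documents whose lexicon vote is unanimous become pseudo-labeled examples, and the word-class counts estimated from them let the second classifier label documents that contain no lexicon words at all.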

Relevance: 60.00%

Publisher:

Abstract:

Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store, with its contents named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference engine for certified validation.

Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling the security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name-service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim that delegates access-control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (the NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
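The credential discovery and assembly step described above can be sketched with a toy key-value store: logic sets live under secure identifiers, carry links reflecting delegation, and the authorizer fetches the transitive closure on demand. The store layout, identifier names, and policy strings are illustrative assumptions, not SAFE's actual logic language or repository API:

```python
# Hedged sketch of SAFE-style proof-context assembly over linked logic sets.
# token id -> (logic statements, links to other logic sets)
store = {
    "root/policy": (["grant(X) :- delegated(X)"], ["alice/cred"]),
    "alice/cred":  (["delegated(bob)"], ["bob/cred"]),
    "bob/cred":    (["member(bob, project)"], []),
}

def assemble_context(token, cache=None):
    """Fetch a logic set and, transitively, every set it links to."""
    cache = {} if cache is None else cache
    if token in cache:
        return cache                        # already fetched and cached
    statements, links = store[token]        # fetch by secure identifier
    cache[token] = statements
    for link in links:                      # follow delegation links
        assemble_context(link, cache)
    return cache

context = assemble_context("root/policy")
print(sorted(context))   # tailored proof context for this request
```

A real authorizer would additionally validate the issuing principal's certification of each fetched set before admitting it to the cached proof context.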

Relevance: 60.00%

Publisher:

Abstract:

Magnetotactic bacteria biomineralize magnetic minerals with precisely controlled size, morphology, and stoichiometry. These cosmopolitan bacteria are widely observed in aquatic environments. If preserved after burial, the inorganic remains of magnetotactic bacteria act as magnetofossils that record ancient geomagnetic field variations. They also have the potential to provide paleoenvironmental information. In contrast to conventional magnetofossils, giant magnetofossils (most likely produced by eukaryotic organisms) have only been reported once before, from Paleocene-Eocene Thermal Maximum (PETM; 55.8 Ma) sediments on the New Jersey coastal plain. Here, using transmission electron microscope observations, we present evidence for abundant giant magnetofossils, including previously reported elongated prisms and spindles and new giant bullet-shaped magnetite crystals, in the Southern Ocean near Antarctica, not only during the PETM but also shortly before and after it. Moreover, we have discovered giant bullet-shaped magnetite crystals from the equatorial Indian Ocean during the Mid-Eocene Climatic Optimum (~40 Ma). Our results indicate a more widespread geographic, environmental, and temporal distribution of giant magnetofossils in the geological record, with a link to "hyperthermal" events. Enhanced global weathering during hyperthermals, and expanded suboxic diagenetic environments, probably provided more bioavailable iron that enabled biomineralization of giant magnetofossils. Our micromagnetic modelling indicates the presence of magnetic multi-domain (i.e., not ideal for navigation) and single-domain (i.e., ideal for navigation) structures in the giant magnetite particles, depending on their size, morphology, and spatial arrangement. Different giant magnetite crystal morphologies appear to have had different biological functions, including magnetotaxis and other non-navigational purposes.
Our observations suggest that hyperthermals provided ideal conditions for giant magnetofossils, and that these organisms were globally distributed. Much more work is needed to understand the interplay between magnetofossil morphology, climate, nutrient availability, and environmental variability.

Relevance: 60.00%

Publisher:

Abstract:

Hysteresis measurements have been carried out on a suite of ocean-floor basalts with ages ranging from Quaternary to Cretaceous. Approximately linear, yet separate, relationships between coercivity (Bc) and the ratio of saturation remanence to saturation magnetization (Mrs/Ms) are observed for massive doleritic basalts with low-Ti magnetite and for pillow basalts with multi-domain titanomagnetites (x = 0.6). Even when the MORB has undergone low-temperature oxidation resulting in titanomaghemite, the parameters are still distinguishable, although offset from the trend for unoxidized multi-domain titanomagnetite. The parameters for these iron oxides with different titanium contents reveal contrasting trends that can be explained by the different saturation magnetizations of the mineral types. This plot provides a previously underutilized and non-destructive method to detect the presence of low-titanium magnetite in igneous rocks, notably MORB.
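The detection idea can be sketched numerically: fit a separate linear Bc versus Mrs/Ms trend for each mineral type, then assign a new sample to the nearer line. The calibration points and sample values below are illustrative assumptions, not the paper's measured MORB data:

```python
# Hedged sketch of trend-based mineral identification from hysteresis data.
import numpy as np

# Synthetic calibration points (Bc in mT, Mrs/Ms dimensionless) per trend.
trends = {
    "low-Ti magnetite":             ([5, 10, 15, 20], [0.02, 0.05, 0.08, 0.11]),
    "multi-domain titanomagnetite": ([5, 10, 15, 20], [0.05, 0.11, 0.17, 0.23]),
}

# Fit Mrs/Ms = a*Bc + b for each mineral type.
fits = {name: np.polyfit(bc, ratio, 1) for name, (bc, ratio) in trends.items()}

def classify(bc, mrs_ms):
    """Assign a sample to the trend line with the smallest vertical residual."""
    residuals = {name: abs(np.polyval(coeff, bc) - mrs_ms)
                 for name, coeff in fits.items()}
    return min(residuals, key=residuals.get)

print(classify(12.0, 0.06))   # sample lying near the shallower trend
```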

Relevance: 60.00%

Publisher:

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation. In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer executing, as a subroutine, a global trust update scheme. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended to include more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., it is guaranteed that a normal node always behaves as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers. In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, scalar-valued trust is not sufficient to reflect a worker's trustworthiness in each of the domains.
To address this issue, we propose a probabilistic model to jointly infer the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. Our model is flexible and can be extended to incorporate metadata associated with questions. To show this, we further propose two extended models, one of which handles input tasks with real-valued features while the other handles tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can be very useful for future task assignment adaptive to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, the workers who provide answers are not independent of each other either: answers given by workers with similar attributes tend to be correlated. We therefore propose a novel unified graphical model consisting of two layers. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we have devised a scalable joint inference algorithm based on the alternating direction method of multipliers. The third part of the thesis considers the problem of optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost.
The cost is associated with both the level of trustworthiness of workers and the difficulty of tasks. Typically, access to expert-level (more trustworthy) workers is more expensive than access to the average crowd, and completion of a challenging task is more costly than a click-away question. Here, we address the problem of optimally assigning heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs the optimal assignment of tasks to workers. We derive a bound on the total error probability that naturally relates the budget, the trustworthiness of crowds, and the costs of obtaining labels from them: a higher budget, more trustworthy crowds, and less costly jobs all result in a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component, so it can be combined with generic trust evaluation algorithms.
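The interplay between trust evaluation and answer aggregation in the second part can be illustrated with a generic iterative scheme: alternately estimate true labels by trust-weighted voting and re-estimate each worker's trust from agreement with those labels. This is a simplified sketch with scalar trust, not the thesis's multi-dimensional probabilistic model:

```python
# Hedged sketch of trust-based answer aggregation with toy binary answers.
import numpy as np

def aggregate(answers, n_iters=10):
    """answers: (n_workers, n_questions) matrix of 0/1 answers."""
    n_workers, n_questions = answers.shape
    trust = np.full(n_workers, 0.5)            # initial scalar trust per worker
    for _ in range(n_iters):
        # Trust-weighted vote for each question's label.
        weights = trust / trust.sum()
        labels = (weights @ answers > 0.5).astype(int)
        # Update trust as each worker's agreement rate with current labels.
        trust = (answers == labels).mean(axis=1)
        trust = np.clip(trust, 1e-3, 1.0)      # keep weights positive
    return labels, trust

# Three mostly reliable workers and one adversary on four questions.
answers = np.array([[1, 0, 1, 1],
                    [1, 0, 1, 1],
                    [1, 0, 0, 1],
                    [0, 1, 0, 0]])             # adversary flips every answer
labels, trust = aggregate(answers)
print(labels, trust)   # adversary's trust collapses; labels follow the reliable workers
```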

Relevance: 60.00%

Publisher:

Abstract:

Monoclonal antibodies are a class of therapeutics representing an expanding area of the lucrative biopharmaceutical industry. These complex proteins are predominantly produced from large cultures of mammalian cells, the industry-standard cell line being Chinese Hamster Ovary (CHO) cells. A number of optimisation strategies have led to antibody titres from CHO cells increasing a hundred-fold, and it has been proposed that a further bottleneck in biosynthesis lies in protein folding and assembly within the secretory pathway. To alleviate this bottleneck, researchers at the pharmaceutical company UCB generated a CHO-derived host cell line that stably overexpressed two critical genes: XBP1, a transcription factor capable of expanding the endoplasmic reticulum and upregulating protein chaperones; and Ero1α, an oxidase that replenishes the machinery of disulphide bond formation. This host cell line, named CHO-S XE, was confirmed to have a high yield of secreted antibody. The work presented in this thesis further characterises CHO-S XE, with the aim of using the information gained to guide the generation of novel host cell lines with more optimal characteristics than CHO-S XE. In addition to antibodies, CHO-S XE was found to have improved production of two other secreted proteins, one with a simple tertiary structure and one a complex multi-domain protein, as well as higher levels of a number of endogenous protein chaperones. As a more controlled system of gene expression for unravelling the specific roles of XBP1 and Ero1α in the secretory properties of CHO-S XE, CHO cells with inducible overexpression of XBP1, Ero1α, or a third gene involved in the Unfolded Protein Response, GADD34, were generated. These cell lines showed that more antibody was secreted by cells with induced overexpression of XBP1; however, Ero1α and GADD34 overexpression did not improve antibody yield.

Further investigation revealed that endogenous XBP1 splicing was downregulated in the presence of an abundance of the active form of XBP1. This result indicated a novel aspect of the regulation of the activity of IRE1, the stress-induced endoribonuclease responsible for XBP1 splicing. Overall, the work described in this thesis confirms that overexpression of XBP1 enhances the secretory properties of CHO cells; this information could contribute to the development of host cells with a greater capacity for antibody production.

Relevance: 60.00%

Publisher:

Abstract:

The performance, energy-efficiency, and cost improvements due to traditional technology scaling have begun to slow down and present diminishing returns. Underlying reasons for this trend include fundamental physical limits of transistor scaling, the growing significance of quantum effects as transistors shrink, and a growing mismatch between transistors and interconnects regarding size, speed, and power. Continued Moore's Law scaling will not come from technology scaling alone; it must involve improvements to design tools and the development of new disruptive technologies such as 3D integration. 3D integration offers potential improvements to interconnect power and delay by translating the routing problem into a third dimension, and it facilitates transistor density scaling independent of technology node. Furthermore, 3D IC technology opens up a new architectural design space of heterogeneously integrated high-bandwidth CPUs. Vertical integration promises to provide the CPU architectures of the future by integrating high-performance processors with on-chip high-bandwidth memory systems and highly connected network-on-chip structures. Such techniques can overcome the well-known CPU performance bottlenecks referred to as the memory and communication walls. However, the promising improvements to performance and energy efficiency offered by 3D CPUs do not come without cost, both in the financial investment required to develop the technology and in the increased complexity of design. Two main limitations of 3D IC technology have been heat removal and TSV reliability. Transistor stacking increases power density, current density, and thermal resistance in air-cooled packages. Furthermore, the technology introduces vertical through-silicon vias (TSVs) that create new points of failure in the chip and require the development of new BEOL technologies.

Although these issues can be controlled to some extent using thermal- and reliability-aware physical and architectural 3D design techniques, high-performance embedded cooling schemes, such as micro-fluidic (MF) cooling, are fundamentally necessary to unlock the true potential of 3D ICs. A new paradigm is being put forth that integrates the computational, electrical, physical, thermal, and reliability views of a system. The unification of these diverse aspects of integrated circuits is called Co-Design. Independent design and optimization of each aspect leads to sub-optimal designs due to a lack of understanding of cross-domain interactions and their impacts on the feasibility region of the architectural design space. Co-Design enables optimization across layers with a multi-domain view and thus unlocks new high-performance and energy-efficient configurations. Although the co-design paradigm is becoming increasingly necessary in all fields of IC design, it is even more critical in 3D ICs where, as we show, the inter-layer coupling and higher degree of connectivity between components exacerbate the interdependence between architectural parameters, physical design parameters, and the multitude of metrics of interest to the designer (i.e., power, performance, temperature, and reliability). In this dissertation we present a framework for multi-domain co-simulation and co-optimization of 3D CPU architectures with both air and MF cooling solutions. Finally, we propose an approach for design space exploration and modeling within the new Co-Design paradigm, and discuss possible avenues for future improvement of this work.

Relevance: 60.00%

Publisher:

Abstract:

This research seeks to contribute to the literature on influence tactics in leadership. It arises as an application, to two specific cases, of the research project "Los mecanismos de influencia en la relación de liderazgo", developed by Professor Juan Javier Saavedra Mayorga within the Organizational Studies research line of the Grupo de Investigación en Dirección y Gerencia. The fundamental objective of the research is to identify the influence tactics that two organizational leaders use in their daily dealings with their collaborators, as well as the latter's reactions to those tactics. The project begins with a theoretical review of three elements: leadership; influence and power; and collaborators' reactions to the influence tactics used by the leader. The methodological strategy employed is the case study. The fieldwork was carried out in two organizations: Microscopios y Equipos Especiales S.A.S. and Tecniespectro S.A.S. The data-collection technique is the semi-structured interview, and the method of analysis is thematic content analysis.

Relevance: 60.00%

Publisher:

Abstract:

The fundamental objective of this work is to identify the influence tactics that Mr. Carlos Pérez, manager and principal partner of G. & M., uses in his daily dealings with his collaborators, as well as the latter's reactions to those tactics.

Relevance: 60.00%

Publisher:

Abstract:

Recent research trends in computer-aided drug design have shown an increasing interest in advanced approaches able to deal with large amounts of data. This demand arose from the awareness of the complexity of biological systems and from the availability of data provided by high-throughput technologies. As a consequence, drug research has embraced this paradigm shift by exploiting approaches such as those based on networks. Indeed, the process of drug discovery can benefit from the implementation of network-based methods at different steps, from target identification to drug repurposing. From this broad range of opportunities, this thesis focuses on three main topics: (i) chemical space networks (CSNs), which are designed to represent and characterize bioactive compound data sets; (ii) drug-target interaction (DTI) prediction through a network-based algorithm that predicts missing links; and (iii) COVID-19 drug research, explored by implementing COVIDrugNet, a network-based tool for COVID-19-related drugs. The main conclusion of this thesis is that network-based approaches are useful methodologies for tackling different issues in drug research. In detail, CSNs are valuable coordinate-free, graphically accessible representations of the structure-activity relationships of bioactive compound data sets, especially for medium-to-large libraries of molecules. DTI prediction through the random walk with restart algorithm on heterogeneous networks can be a helpful method for target identification. COVIDrugNet is an example of the usefulness of network-based approaches for studying drugs related to a specific condition, i.e., COVID-19, and the same 'systems-based' approaches can be used for other diseases.

To conclude, network-based tools are proving suitable for many applications in drug research and provide the opportunity to model and analyze diverse drug-related data sets, even large ones, while also integrating multi-domain information.
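The DTI-prediction step can be sketched with a random walk with restart on a toy graph: scores measure proximity of candidate targets to a seed drug. The adjacency matrix below is an illustrative assumption, not a real heterogeneous pharmacological network:

```python
# Hedged sketch of random walk with restart (RWR) for link prediction.
import numpy as np

def rwr(adj, seed, restart=0.3, tol=1e-10):
    """Random walk with restart on a graph given by adjacency matrix adj."""
    # Column-normalize to obtain transition probabilities.
    col_sums = adj.sum(axis=0)
    trans = adj / np.where(col_sums == 0, 1, col_sums)
    p = np.zeros(adj.shape[0]); p[seed] = 1.0
    p0 = p.copy()
    while True:
        p_next = (1 - restart) * trans @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy network: nodes 0-1 are drugs, 2-4 are targets; edges mix known
# interactions and similarity links.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 0, 1, 0],
                [1, 0, 0, 0, 1],
                [0, 1, 0, 0, 1],
                [0, 0, 1, 1, 0]], dtype=float)
scores = rwr(adj, seed=0)
print(scores)   # higher score = stronger predicted association with drug 0
```

Targets closer to the seed drug in the network receive higher steady-state probability, which is the ranking signal used for candidate-target prioritization.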

Relevance: 60.00%

Publisher:

Abstract:

Malignant Pleural Mesothelioma (MPM) is a very aggressive cancer whose incidence is growing worldwide. MPM escapes the classical models of carcinogenesis and lacks a distinctive genetic fingerprint, keeping obscure the molecular events that lead to tumorigenesis. This contributes to the limited therapeutic options and the lack of specific biomarkers, making MPM one of the deadliest cancers. Here we combined a functional genome-wide loss-of-function CRISPR/Cas9 screen with patients' transcriptomic and clinical data to identify genes essential for MPM progression. In addition, we explored the role of non-coding RNAs in MPM progression by analysing gene expression profiles and clinical data from the MESO-TCGA dataset. We identified TRIM28 and the lncRNA LINC00941 as new vulnerabilities of MPM, associated with disease aggressiveness and bad patient outcome. TRIM28 is a multi-domain protein involved in many processes, including transcription regulation. We showed that TRIM28 silencing impairs MPM cells' growth and clonogenicity by blocking cells in mitosis. RNA-seq profiling showed that TRIM28 loss abolished the expression of major mitotic players. Our data suggest that TRIM28 is part of the B-MYB/FOXM1-MuvB complex that specifically drives the activation of mitotic genes, keeping the timing of mitosis. In parallel, we found LINC00941 to be strongly associated with reduced survival probability in MPM patients. LINC00941 knockdown profoundly reduced MPM cells' growth, migration, and invasion, accompanied by changes in morphology, cytoskeleton organization, and cell-cell adhesion properties. RNA-seq profiling showed that LINC00941 knockdown impacts crucial functions of MPM, including HIF1α signalling.

Collectively, these data provide new insights into MPM biology and demonstrate that integrating functional screening with patients' clinical data is a powerful tool to highlight new non-genetic cancer dependencies associated with a bad outcome in vivo, paving the way to new MPM-oriented targeted strategies and prognostic tools to improve patients' risk-based stratification.

Relevance: 40.00%

Publisher:

Abstract:

As a common reference for many in-development standards and execution frameworks, special attention is being paid to Service-Oriented Architectures (SOAs). SOA modeling, however, is an area in which consensus has not been achieved. Currently, standardization organizations are defining proposals to offer a solution to this problem. Nevertheless, until very recently, non-functional aspects of services had not been considered in standardization processes. In particular, there is a lack of a design solution that permits independent development of the functional and non-functional concerns of SOAs, allowing each concern to be addressed in a convenient manner in early stages of development, in a way that could guarantee the quality of this type of system. This paper, building on previous work, presents an approach to integrate security-related non-functional aspects (such as confidentiality, integrity, and access control) in the development of services.

Relevance: 40.00%

Publisher:

Abstract:

A technique is presented for the development of a high-precision, high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers to minimise radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
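The spectral optimal interpolation step can be illustrated in one dimension with a Wiener-style filter: each Fourier component is attenuated by an assumed signal-to-(signal-plus-noise) power ratio. The grid, spectra, and noise level below are illustrative assumptions, not the actual ERS-1/TOPEX processing chain:

```python
# Hedged sketch of Fourier-domain optimal interpolation on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
signal = np.sin(3 * x) + 0.5 * np.sin(7 * x)     # smooth "sea surface"
noisy = signal + 0.3 * rng.standard_normal(n)    # residual altimeter noise

# Assumed power spectra: red signal at low wavenumbers, white noise everywhere.
F = np.fft.rfft(noisy)
k = np.arange(F.size)
signal_psd = 1.0 / (1.0 + (k / 10.0) ** 4)
noise_psd = np.full_like(signal_psd, 0.3 ** 2)
wiener = signal_psd / (signal_psd + noise_psd)   # per-wavenumber gain

smoothed = np.fft.irfft(wiener * F, n)
err_before = np.sqrt(np.mean((noisy - signal) ** 2))
err_after = np.sqrt(np.mean((smoothed - signal) ** 2))
print(err_before, err_after)   # filtering reduces the RMS error
```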

Relevance: 40.00%

Publisher:

Abstract:

A two-dimensional (2D) finite-difference time-domain (FDTD) method is used to analyze two different models of multi-conductor transmission lines (MTLs). The first model is a two-conductor MTL and the second is a three-conductor MTL. Apart from the MTLs, a three-dimensional (3D) FDTD method is used to analyze a three-patch microstrip parasitic array. While the MTL analysis is entirely in the time domain, the microstrip parasitic array study examines the scattering parameter S11 in the frequency domain. The results clearly indicate that FDTD is an efficient and accurate tool to model and analyze multi-conductor transmission lines as well as microstrip antennas and arrays.
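The leapfrog update scheme at the heart of such FDTD analyses can be illustrated in one dimension with the lossless telegrapher's equations for a single transmission line. The grid size, per-unit-length L and C, and Gaussian source below are illustrative assumptions, not the paper's two- and three-conductor MTL models:

```python
# Hedged sketch of a 1D FDTD transmission-line simulation (telegrapher's
# equations), with the "magic" time step giving a Courant number of 1.
import numpy as np

nz, nt = 200, 150
L, C = 2.5e-7, 1.0e-10          # per-unit-length inductance and capacitance
dz = 1e-3
dt = dz * np.sqrt(L * C)        # magic time step: one cell per step

v = np.zeros(nz)                # node voltages
i = np.zeros(nz - 1)            # branch currents (staggered half-cell grid)

for n in range(nt):
    # Leapfrog updates of the discretized telegrapher's equations.
    i -= (dt / (L * dz)) * (v[1:] - v[:-1])
    v[1:-1] -= (dt / (C * dz)) * (i[1:] - i[:-1])
    # Gaussian hard source at the left end of the line.
    v[0] = np.exp(-((n * dt - 20 * dt) ** 2) / (8 * dt) ** 2)

print(float(np.max(np.abs(v))))   # pulse propagates along the line
```

At the magic time step, the 1D scheme propagates the pulse one cell per step without numerical dispersion; the 2D and 3D schemes used in the paper follow the same staggered-grid leapfrog pattern with a stricter stability limit.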