837 results for AML Schema (XSD)
Abstract:
Background: We and others have identified the aldo-keto reductase AKR1C3 as a potential drug target in prostate cancer, breast cancer and leukaemia. As a consequence, significant effort is being invested in the development of AKR1C3-selective inhibitors. Methods: We report the screening of an in-house drug library to identify known drugs that selectively inhibit AKR1C3 over the closely related isoforms AKR1C1, 1C2 and 1C4. This screen initially identified tetracycline as a potential AKR1C3-selective inhibitor. However, mass spectrometry and nuclear magnetic resonance studies identified that the active agent was a novel breakdown product, 4-methyl(de-dimethylamine)-tetracycline (4-MDDT). Results: We demonstrate that, although 4-MDDT enters AML cells and inhibits their AKR1C3 activity, it does not recapitulate the anti-leukaemic actions of the pan-AKR1C inhibitor medroxyprogesterone acetate (MPA). Screens of the NCI diversity set and an independently curated small-molecule library identified several additional AKR1C3-selective inhibitors, none of which had the expected anti-leukaemic activity. However, a pan-AKR1C inhibitor, also identified in the NCI diversity set, faithfully recapitulated the actions of MPA. Conclusions: In summary, we have identified a novel tetracycline-derived product that provides an excellent lead structure with proven drug-like qualities for the development of AKR1C3 inhibitors. However, our findings suggest that, at least in leukaemia, selective inhibition of AKR1C3 is insufficient to elicit an anticancer effect and that multiple AKR1C inhibition may be required. © 2014 Cancer Research UK. All rights reserved.
Abstract:
* This work is partially supported by grant No 24-7/05 of the National Academy of Sciences of Ukraine for the support of scientific research by young scientists, "Development of a Desktop Grid system and optimization of its performance".
Abstract:
In the global strategy for the preservation of genetic resources of farm animals, the implementation of information technology is of great importance. In this regard, platform-independent information tools and approaches for data exchange are needed in order to obtain aggregate values for the regions and countries in which a particular breed is distributed. The current paper presents an XML-based solution for data exchange in the management of genetic resources of small populations of farm animals. There are specific requirements on the exchanged documents that come from the goal of the data analysis. Three main types of documents are distinguished and their XML formats are discussed. A DTD and an XML Schema for each type are suggested. Some examples of XML documents are also given.
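For illustration only, the following minimal Python sketch builds one hypothetical exchange document of the kind such a solution implies; the element and field names (PopulationReport, Herd, BreedingFemales, BreedingMales) are assumptions made for this sketch and are not the formats defined in the paper.

```python
# Minimal sketch of building one hypothetical exchange document.
# Element and field names are illustrative assumptions, not the
# formats defined in the paper.
import xml.etree.ElementTree as ET

def build_population_report(breed, country, year, records):
    """Build a simple breed-population XML document."""
    root = ET.Element("PopulationReport", attrib={
        "breed": breed, "country": country, "year": str(year)})
    for herd_id, females, males in records:
        herd = ET.SubElement(root, "Herd", attrib={"id": herd_id})
        ET.SubElement(herd, "BreedingFemales").text = str(females)
        ET.SubElement(herd, "BreedingMales").text = str(males)
    return ET.ElementTree(root)

if __name__ == "__main__":
    tree = build_population_report(
        "Example Breed", "BG", 2005,
        [("H001", 42, 3), ("H002", 17, 1)])
    ET.indent(tree)  # pretty-print; requires Python 3.9+
    print(ET.tostring(tree.getroot(), encoding="unicode"))
```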
Abstract:
The problem of multi-agent routing in static telecommunication networks with a fixed configuration is considered. The problem is formulated in two ways: for a centralized routing schema with a coordinator agent (global routing) and for a distributed routing schema with independent agents (local routing). For both schemas, appropriate Hopfield neural networks (HNNs) are constructed.
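As a rough illustration of the underlying mechanism only (not the routing-specific networks constructed in the paper), the sketch below runs generic discrete Hopfield dynamics: asynchronous updates of bipolar states that never increase the network energy. The weights and biases are placeholders.

```python
# Generic discrete Hopfield dynamics: asynchronous updates that never
# increase the energy E = -0.5*s'Ws - b's (symmetric W, zero diagonal).
# The routing-specific weights/biases of the paper are NOT reproduced;
# W and b below are placeholders for illustration.
import numpy as np

def hopfield_run(W, b, s, steps=200, rng=None):
    """Asynchronously update bipolar states s in {-1, +1}."""
    rng = rng or np.random.default_rng(0)
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))                 # pick a random neuron
        s[i] = 1 if W[i] @ s + b[i] >= 0 else -1  # threshold update
    return s

def energy(W, b, s):
    return -0.5 * s @ W @ s - b @ s

if __name__ == "__main__":
    n = 6
    rng = np.random.default_rng(1)
    W = rng.normal(size=(n, n)); W = (W + W.T) / 2  # symmetric weights
    np.fill_diagonal(W, 0)                          # no self-connections
    b = rng.normal(size=n)
    s0 = rng.choice([-1, 1], size=n)
    s = hopfield_run(W, b, s0)
    print(energy(W, b, s0), "->", energy(W, b, s))  # energy decreases
```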
Abstract:
2000 Mathematics Subject Classification: 60J80, 60K05.
Abstract:
In recent years, issues concerning the computation of reducts of decision tables in the rough set approach have attracted the attention of many researchers. In this paper, we present the time complexity of an algorithm for computing reducts of decision tables by a relational database approach. Let DS = (U, C ∪ {d}) be a consistent decision table; we say that A ⊆ C is a relative reduct of DS if A contains a reduct of DS. Let s =
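Since the abstract is truncated here, the following sketch only illustrates the basic test behind the stated definition: for a consistent decision table, a subset A ⊆ C contains a reduct exactly when A functionally determines the decision attribute, which is the relational-database (functional dependency) view of the problem. The table encoding below is an assumption of the sketch.

```python
# Sketch of the basic test behind reduct computation in a consistent
# decision table DS = (U, C ∪ {d}): a subset A ⊆ C contains a reduct
# iff objects that agree on all attributes in A also agree on d,
# i.e. A functionally determines d (the relational-database view).
def determines_decision(table, A, d):
    """table: list of dicts, one per object in U."""
    seen = {}
    for row in table:
        key = tuple(row[a] for a in A)
        if key in seen and seen[key] != row[d]:
            return False          # same A-values, different decisions
        seen[key] = row[d]
    return True

if __name__ == "__main__":
    U = [
        {"c1": 1, "c2": 0, "c3": 1, "d": "yes"},
        {"c1": 1, "c2": 1, "c3": 0, "d": "no"},
        {"c1": 0, "c2": 1, "c3": 1, "d": "no"},
    ]
    print(determines_decision(U, ["c1", "c2"], "d"))  # True
    print(determines_decision(U, ["c3"], "d"))        # False
```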
Abstract:
Prior research on brand extension has provided little evidence on enhancing the evaluation of extremely incongruent extensions. Adopting the theoretical framework of schema congruity theory, the author posits that evaluations can be improved if the brand personality impressions of the parent brand and the extension are complementary. The author coins this the brand personality complementarity (BPC) principle. Prior to examining the BPC effect, a cultural-specific brand personality scale was developed to identify universal and indigenous brand personality dimensions, because BPC requires a reliable and valid brand personality scale in order to detect its effect. Following successful identification of the cultural-specific brand personality scale, a total of three experimental studies were conducted to investigate the BPC effect. Specifically, one experimental study identified complementarity levels amongst brand personality dimensions, whereas two experimental studies investigated the moderating effect of BPC. Findings from the scale development study reveal that the Malaysian brand personality (MBP) scale is a second higher-order factor reflected by the first higher-order factors of sophistication, youth, competence, and sincerity. Most importantly, findings from the experimental studies revealed: 1) different BPC levels amongst all possible pairs of MBP dimensions, 2) a significant interaction effect of brand extension congruity x BPC, and 3) a significant mediation effect of complementarity resolution. Specific findings indicated that when text-based stimuli were used to form the brand personality impression, even a low BPC level improved the evaluations of extremely incongruent extensions. However, when visual-based stimuli were used, a low BPC level worsened the extension evaluation compared with that of the control condition (i.e., without a brand personality impression). Implications for both academicians and practitioners are discussed.
Abstract:
While some aspects of social processing are shared between humans and other species, some aspects are not. The former seems to apply to merely tracking another's visual perspective in the world (i.e., what a conspecific can or cannot perceive), while the latter applies to perspective taking in the form of mentally "embodying" another's viewpoint. Our previous behavioural research had indicated that only perspective taking, but not tracking, relies on simulating a body schema rotation into another's viewpoint. In the current study we employed magnetoencephalography (MEG) and revealed that this mechanism of mental body schema rotation is primarily linked to theta oscillations in a wider brain network of body-schema, somatosensory and motor-related areas, with the right posterior temporo-parietal junction (pTPJ) at its core. The latter was reflected by a convergence of theta oscillatory power in right pTPJ obtained by overlapping the separately localised effects of rotation demands (angular disparity effect), cognitive embodiment (posture congruence effect), and basic body schema involvement (posture relevance effect) during perspective taking in contrast to perspective tracking. In a subsequent experiment we interfered with right pTPJ processing using dual-pulse transcranial magnetic stimulation (dpTMS) and observed a significant reduction of embodied processing. We conclude that right pTPJ is the crucial network hub for transforming the embodied self into another's viewpoint, body and/or mind, thus substantiating how conflicting representations between self and other may be resolved and potentially highlighting the embodied origins of high-level social cognition in general.
Abstract:
Because some Web users will be able to design a template to visualize information from scratch, while other users need to automatically visualize information by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, and static or pre-specified visualization through an interface language. We address information visualization taking into consideration the Web, where the presentation of the retrieved information is a challenge. We provide a model to narrow the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports. As opposed to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to easily connect the database to the Web. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front-ends typically attempt to display database objects in a flat view, making it difficult for users to grasp the contents and the structure of their result. Our model narrows the gap between databases and the Web. The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely to make the necessary modifications and manipulations of data using the Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not well versed in SQL to build a syntactically and semantically valid SQL query and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
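As a hedged sketch of the kind of schema check described above (not the system's actual interface; the table and column names are hypothetical), the following shows a generated SELECT being validated against a schema description before execution:

```python
# Sketch of validating a generated query against a schema description
# before execution. Table and column names are hypothetical.
SCHEMA = {
    "orders": {"id", "customer_id", "total"},
    "customers": {"id", "name", "country"},
}

def build_select(table, columns, where_column=None):
    """Build a SELECT after checking every identifier against SCHEMA."""
    if table not in SCHEMA:
        raise ValueError(f"unknown table: {table}")
    for col in columns + ([where_column] if where_column else []):
        if col not in SCHEMA[table]:
            raise ValueError(f"unknown column: {table}.{col}")
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where_column:
        sql += f" WHERE {where_column} = ?"   # value bound as a parameter
    return sql

if __name__ == "__main__":
    print(build_select("orders", ["id", "total"], "customer_id"))
    # SELECT id, total FROM orders WHERE customer_id = ?
```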
Abstract:
An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of a Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. An application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed. A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and has several advantages over the existing interfaces. It is optimizable and parallelizable, supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases. The analysis and high-level design of a system that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use to allow uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface is presented. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system to provide an ODBC interface to the WWW as a data source is discussed.
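To make the "query as a schema graph with interlinking conditionals" idea concrete, here is a small, purely illustrative sketch of such a structure; it is not the actual Sem-ODB query interface, and the category and relation names are assumptions.

```python
# Illustrative sketch only: a query expressed as a schema graph with
# conditions attached to its nodes. Not the actual Sem-ODB interface;
# category and relation names are assumptions.
from dataclasses import dataclass, field

@dataclass
class QueryNode:
    category: str                          # e.g. "Student"
    conditions: dict = field(default_factory=dict)

@dataclass
class QueryEdge:
    relation: str                          # e.g. "enrolled_in"
    source: QueryNode
    target: QueryNode

# "Students older than 20 enrolled in database courses"
student = QueryNode("Student", {"age": (">", 20)})
course = QueryNode("Course", {"topic": ("=", "databases")})
query_graph = [QueryEdge("enrolled_in", student, course)]

for edge in query_graph:
    print(f"{edge.source.category} -{edge.relation}-> {edge.target.category}")
```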
Abstract:
This paper examines the history of schema theory and how culture is incorporated into schema theory. Furthermore, the author argues that cultural schema affects students’ usage of reader-based processing and text-based processing in reading.
Abstract:
Organized crime and illegal economies generate multiple threats to states and societies. But although the negative effects of high levels of pervasive street and organized crime on human security are clear, the relationships between human security, crime, illicit economies, and law enforcement are highly complex. By sponsoring illicit economies in areas of state weakness where legal economic opportunities and public goods are seriously lacking, both belligerent and criminal groups frequently enhance some elements of human security of the marginalized populations who depend on illicit economies for basic livelihoods. Even criminal groups without a political ideology often have an important political impact on the lives of communities and on their allegiance to the State. Criminal groups also have political agendas. Both belligerent and criminal groups can develop political capital through their sponsorship of illicit economies. The extent of their political capital is dependent on several factors. Efforts to defeat belligerent groups by decreasing their financial flows through suppression of an illicit economy are rarely effective. Such measures, in turn, increase the political capital of anti-State groups. The effectiveness of anti-money laundering measures (AML) also remains low and is often highly contingent on specific vulnerabilities of the target. The design of AML measures has other effects, such as on the size of a country's informal economy. Multifaceted anti-crime strategies that combine law enforcement approaches with targeted socio-economic policies and efforts to improve public goods provision, including access to justice, are likely to be more effective in suppressing crime than tough mailed-fist approaches. For anti-crime policies to be effective, they often require a substantial, but politically difficult, concentration of resources in target areas. In the absence of effective law enforcement capacity, legalization and decriminalization policies of illicit economies are unlikely on their own to substantially reduce levels of criminality or to eliminate organized crime. Effective police reform, for several decades largely elusive in Latin America, is one of the most urgently needed policy reforms in the region. Such efforts need to be coupled with fundamental judicial and correctional systems reforms. Yet, regional approaches cannot obliterate the so-called balloon effect. If demand persists, even under intense law enforcement pressures, illicit economies will relocate to areas of weakest law enforcement, but they will not be eliminated.
Abstract:
In support of research in the debate concerning its relevance to hospitality academics and practitioners, the author presents a discussion of how the philosophy of science impacts approaches to research, including a brief summary of empiricism and the importance of the triangulation of research orientations. Criticism of research in the hospitality literature often focuses on the lack of an apparent philosophy-of-science perspective and how this perspective impacts the way in which scholars conduct and interpret research. The Validity Network Schema (VNS) presents a triangulation model for evaluating research progress in a discipline by providing a mechanism for integrating academic and practitioner research studies.
Abstract:
Over the past five years, XML has been embraced by both the research and industrial communities due to its promising prospects as a new data representation and exchange format on the Internet. The widespread popularity of XML creates an increasing need to store XML data in persistent storage systems and to enable sophisticated XML queries over the data. The currently available approaches to addressing the XML storage and retrieval issue have the limitations of either not being mature enough (e.g., native approaches) or causing inflexibility, heavy fragmentation and excessive join operations (e.g., non-native approaches such as the relational database approach). In this dissertation, I studied the issue of storing and retrieving XML data using the Semantic Binary Object-Oriented Database System (Sem-ODB) to leverage the advanced Sem-ODB technology with the emerging XML data model. First, a meta-schema based approach was implemented to address the data model mismatch issue that is inherent in the non-native approaches. The meta-schema based approach captures the meta-data of both Document Type Definitions (DTDs) and Sem-ODB Semantic Schemas, thus enabling a dynamic and flexible mapping scheme. Second, a formal framework was presented to ensure precise and concise mappings. In this framework, both the schemas and the conversions between them are formally defined and described. Third, after the major features of an XML query language, XQuery, were analyzed, a high-level XQuery to Semantic SQL (Sem-SQL) query translation scheme was described. This translation scheme takes advantage of the navigation-oriented query paradigm of Sem-SQL, thus avoiding the excessive join problem of relational approaches. Finally, the modeling capability of the Semantic Binary Object-Oriented Data Model (Sem-ODM) was explored from the perspective of conceptually modeling an XML Schema using a Semantic Schema. It was revealed that the advanced features of the Sem-ODB, such as multi-valued attributes, surrogates, and the navigation-oriented query paradigm, among others, are indeed beneficial in coping with the XML storage and retrieval issue using a non-XML approach. Furthermore, extensions to the Sem-ODB to make it work more effectively with XML data were also proposed.
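As an illustrative sketch of the meta-schema idea (not the dissertation's actual implementation; all structures and names below are assumptions), the following records DTD element metadata and maps it onto semantic categories and relations:

```python
# Sketch of a meta-schema style mapping: DTD element metadata is
# recorded and mapped onto semantic categories and relations.
# The structures and names are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class DTDElement:
    name: str
    attributes: list = field(default_factory=list)
    children: list = field(default_factory=list)    # names of sub-elements

@dataclass
class SemanticCategory:
    name: str
    attributes: list = field(default_factory=list)
    relations: list = field(default_factory=list)   # (relation, target category)

def map_dtd_to_semantic(elements):
    """Map each DTD element to a category; sub-elements become relations."""
    categories = {e.name: SemanticCategory(e.name, list(e.attributes))
                  for e in elements}
    for e in elements:
        for child in e.children:
            categories[e.name].relations.append(("has_" + child, child))
    return categories

if __name__ == "__main__":
    dtd = [DTDElement("book", ["isbn"], ["chapter"]),
           DTDElement("chapter", ["title"], [])]
    for cat in map_dtd_to_semantic(dtd).values():
        print(cat)
```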
Abstract:
The Three-Layer distributed mediation architecture, designed by the Secure System Architecture laboratory, employs a layered framework of presence, integration, and homogenization mediators. The architecture does not have any central component that may affect the system's reliability. A distributed search technique was adapted for the system to increase its reliability. An Enhanced Chord-like algorithm (E-Chord) was designed and deployed in the integration layer. The E-Chord is a skip-list algorithm based on a Distributed Hash Table (DHT), which is a distributed but structured architecture. The DHT is distributed in the sense that no central unit is required to maintain indexes, and it is structured in the sense that indexes are distributed over the nodes in a systematic manner. Each node maintains three kinds of routing information: a frequency list, a successor/predecessor list, and a finger table. None of the nodes in the system maintains all indexes, and each node knows about some other nodes in the system. These nodes, also called composer mediators, are connected in a P2P fashion. A special composer mediator called the global mediator initiates the keyword-based matching decomposition of the request using the E-Chord. It generates an Integrated Data Structure Graph (IDSG) on the fly, creates association and dependency relations between nodes in the IDSG, and then generates a Global IDSG (GIDSG). The GIDSG graph is a plan which guides the global mediator in how to integrate data. It is also used to stream data from the mediators in the homogenization layer, which are connected to the data sources. The connectors start sending the data to the global mediator just after the global mediator creates the GIDSG and just before the global mediator sends the answer to the presence mediator. Using the E-Chord and the GIDSG made the mediation system more scalable than using a central global schema repository, since all the composers in the integration layer are capable of handling and routing requests. Also, when a composer fails, it only minimally affects the entire mediation system.
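For orientation, the sketch below shows a plain Chord-style successor lookup on an identifier ring; the E-Chord additions described above (frequency lists, finger-table routing, the IDSG/GIDSG plan structures) are deliberately not modelled, and the node names are assumptions.

```python
# Sketch of a classic Chord-style lookup on an m-bit identifier ring.
# The E-Chord extensions described in the abstract (frequency lists,
# finger-table routing, IDSG/GIDSG plans) are not modelled here.
import hashlib

M = 8                                    # identifier bits (ring size 2**M)

def key_id(key: str) -> int:
    """Hash a key or node name onto the identifier ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (2 ** M)

def successor(node_ids, k):
    """Return the node responsible for key id k (first node >= k, wrapping)."""
    ring = sorted(node_ids)
    for n in ring:
        if n >= k:
            return n
    return ring[0]                       # wrap around the ring

if __name__ == "__main__":
    nodes = [key_id(f"composer-{i}") for i in range(5)]   # hypothetical composers
    k = key_id("some keyword")
    print(f"key {k} -> node {successor(nodes, k)} (ring: {sorted(nodes)})")
```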