934 results for Computer Science (all)


Relevance:

90.00%

Publisher:

Abstract:

Most research on emotion detection in written text has focused on detecting explicit expressions of emotion. In this paper, we present a rule-based pipeline approach, based on the OCC Model, for detecting implicit emotions in written text that contains no emotion-bearing words. We have evaluated our approach on three different datasets with five emotion categories. Our results show that the proposed approach consistently outperforms the lexicon matching method across all three datasets by a large margin of 17–30% in F-measure and gives competitive performance compared to a supervised classifier. In particular, when dealing with formal text that strictly follows grammatical rules, our approach gives an average F-measure of 82.7% on “Happy”, “Angry-Disgust” and “Sad”, even outperforming the supervised baseline by nearly 17% in F-measure. These preliminary results show the feasibility of the approach for the task of implicit emotion detection in written text.
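For reference, the margins quoted above are per-class F-measures. A minimal sketch (not from the paper; the labels and counts below are hypothetical) of how per-class precision, recall and F1 are typically computed from gold and predicted emotion labels:

```python
def f_measure(gold, predicted, label):
    """Per-class precision, recall and F1 for one emotion label."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == p == label)
    pred_pos = sum(1 for p in predicted if p == label)
    gold_pos = sum(1 for g in gold if g == label)
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical gold/predicted emotion labels for a handful of sentences.
gold = ["Happy", "Sad", "Angry-Disgust", "Happy", "Sad"]
pred = ["Happy", "Sad", "Sad", "Happy", "Angry-Disgust"]
for emotion in ("Happy", "Angry-Disgust", "Sad"):
    print(emotion, f_measure(gold, pred, emotion))
```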

Relevance:

90.00%

Publisher:

Abstract:

Heuristics, simulation, artificial intelligence techniques and combinations thereof have all been employed in the attempt to make computer systems adaptive, context-aware, reconfigurable and self-managing. This paper complements such efforts by exploring the possibility of achieving runtime adaptiveness using mathematically based techniques from the area of formal methods. It is argued that formal methods @ runtime represents a feasible approach, and promising preliminary results are summarised to support this viewpoint. The survey of existing approaches to employing formal methods at runtime is accompanied by a discussion of their challenges and of the future research required to overcome them. © 2011 Springer-Verlag.

Relevance:

90.00%

Publisher:

Abstract:

The fundamental question of this article is whether economics is a science, and if so, whether it can be viewed as an independent science. The search for an answer starts from the most important economic results of the last century. The author comes to the conclusion that the mainstream economic theories of our times can largely be traced back to the works of Ramsey, Neumann and Haavelmo. The application of the results of mathematics and the natural sciences, especially physics, greatly contributed to its emergence as a science. All this is shown by means of Roy E. Weintraub’s so-called historical reconstruction and Imre Lakatos’s rational reconstruction methods.

Relevance:

90.00%

Publisher:

Abstract:

The fundamental question of this article is whether economics is a science, and if so, whether it can be viewed as an independent science. The search for an answer starts from the most important economic results of the last century. The author comes to the conclusion that the mainstream economic theories of our day can largely be traced back to the works of Ramsey, Neumann and Haavelmo. The application of the results of mathematics and the natural sciences, especially physics, greatly contributed to its becoming a science. All this is shown by means of Roy E. Weintraub’s so-called historical reconstruction and Imre Lakatos’s rational reconstruction methods.

Relevance:

90.00%

Publisher:

Abstract:

In this article the authors deal with computer waste. By computing equipment they mean the components of computer configurations, that is, computers (desktop, portable, terminal, etc.) and their peripherals (monitor, printer, CD writer, etc.), as well as their parts and accessories (chips, mechanical parts, toner cartridges, etc.). The environmental impact of regular use is examined from only one aspect: during regular use, certain parts and supplies (most notably printer toner cartridges) are replaced more frequently than the machine itself and thus become waste. The main focus is the end of the life cycle of computing equipment, and from this point of view the category of used personal computers is a key concept.

Relevance:

90.00%

Publisher:

Abstract:

Nowadays, the scientific and social significance of research into climatic effects has become outstanding. In order to be able to predict the ecological effects of global climate change, it is necessary to study monitoring databases of the past and explore connections. For the case study mentioned in the title, historical weather data series from the Hungarian Meteorological Service and Szaniszló Priszter’s monitoring data on the phenology of geophytes have been used. These data describe on which days the observed geophytes budded, were blooming and withered. In our research we have found that the classification of the observed years according to phenological events and their classification according to the frequency distribution of meteorological parameters show similar patterns, and one variable group is suitable for explaining the pattern shown by the other. Furthermore, an important result is that the dates of all three observed phenophases correlate significantly with the average daily temperature fluctuation in the given period. The second most frequently significant parameter is the number of frosty days, which also seems to be a determinant for all phenophases. The usual approaches based on temperature sums and average temperatures do not appear to be important in this respect. According to the results of the research, the phenology of geophytes can be modelled well with a linear combination of suitable meteorological parameters.
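The closing claim, that phenophase dates can be modelled as a linear combination of meteorological parameters, corresponds to an ordinary least-squares fit. A minimal sketch with hypothetical values (not the study's data or its exact parameter set):

```python
import numpy as np

# Hypothetical yearly records: mean daily temperature fluctuation (°C),
# number of frosty days, and the observed budding date (day of year).
temp_fluctuation = np.array([8.1, 9.4, 7.6, 10.2, 8.8])
frosty_days      = np.array([34, 41, 28, 47, 36])
budding_doy      = np.array([92, 99, 88, 104, 95])

# Phenophase date modelled as a linear combination of the two parameters
# plus an intercept, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(temp_fluctuation), temp_fluctuation, frosty_days])
coeffs, *_ = np.linalg.lstsq(X, budding_doy, rcond=None)
predicted = X @ coeffs
print("coefficients:", coeffs)
print("predicted budding dates:", predicted.round(1))
```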

Relevance:

90.00%

Publisher:

Abstract:

Because some Web users will be able to design a template to visualize information from scratch, while other users need to visualize information automatically by changing some parameters, providing different levels of customization of the information is a desirable goal. Our system allows the automatic generation of visualizations given the semantics of the data, and static or pre-specified visualization by means of an interface language. We address information visualization with the Web in mind, where the presentation of the retrieved information is a challenge.

We provide a model that narrows the gap between the user's way of expressing queries and database manipulation languages (SQL) without changing the system itself, thus improving the query specification process. We develop a Web interface model that is integrated with the HTML language to create a powerful language that facilitates the construction of Web-based database reports.

In contrast to other work, this model offers a new way of exploring databases, focusing on providing Web connectivity to databases with minimal or no result buffering, formatting, or extra programming. We describe how to connect the database to the Web easily. In addition, we offer an enhanced way of viewing and exploring the contents of a database, allowing users to customize their views depending on the contents and the structure of the data. Current database front ends typically attempt to display the database objects in a flat view, making it difficult for users to grasp the contents and the structure of their result. Our model narrows the gap between databases and the Web.

The overall objective of this research is to construct a model that accesses different databases easily across the net and generates SQL, forms, and reports across all platforms without requiring the developer to code a complex application. This increases the speed of development. In addition, using only a Web browser, the end user can retrieve data from databases remotely and make the necessary modifications and manipulations of data using Web-formatted forms and reports, independent of the platform, without having to open different applications or learn to use anything but their Web browser. We introduce a strategic method to generate and construct SQL queries, enabling inexperienced users who are not familiar with SQL to build syntactically and semantically valid SQL queries and to understand the retrieved data. The generated SQL query is validated against the database schema to ensure harmless and efficient SQL execution. (Abstract shortened by UMI.)
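A minimal sketch of the kind of SQL generation and schema validation described above, under assumed names (SCHEMA and build_query are illustrative; this is not the dissertation's interface language):

```python
# Build a SQL query from web-form style filters, accepting only columns present
# in a known schema so the generated statement stays valid and safe to execute.
SCHEMA = {"employees": {"name", "department", "salary", "hired"}}

def build_query(table, filters, columns=None):
    if table not in SCHEMA:
        raise ValueError(f"unknown table: {table}")
    allowed = SCHEMA[table]
    columns = list(columns or sorted(allowed))
    bad = [c for c in list(filters) + columns if c not in allowed]
    if bad:
        raise ValueError(f"unknown columns: {bad}")
    where = " AND ".join(f"{col} = ?" for col in filters)
    sql = f"SELECT {', '.join(columns)} FROM {table}"
    if where:
        sql += f" WHERE {where}"
    return sql, list(filters.values())

# e.g. a form submission asking for people in one department:
print(build_query("employees", {"department": "QA"}, ["name", "salary"]))
```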

Relevance:

90.00%

Publisher:

Abstract:

This thesis chronicles the design and implementation of an Internet/Intranet and database-based application for the quality control of hurricane surface wind observations. A quality control session consists of selecting the desired observation types to be viewed and determining a storm-track-based time window for viewing the data. All observations of the selected types are then plotted in a storm-relative view for the chosen time window, and geography is positioned for the storm-center time about which an objective analysis can be performed. Users then make decisions about data validity through visual nearest-neighbor comparison and inspection. The project employed an object-oriented iterative development method from beginning to end, and its implementation primarily features the Java programming language.
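A storm-relative view of this kind amounts to re-expressing each observation's position relative to the interpolated storm centre at the observation time. A minimal sketch in Python (the thesis implementation is in Java; the track fixes and the flat-earth conversion below are illustrative assumptions):

```python
import math

def storm_relative(obs_lat, obs_lon, obs_time, track):
    """Position of an observation relative to the interpolated storm centre.

    `track` is a time-sorted list of (time, lat, lon) fixes; the centre at
    obs_time is found by linear interpolation between the bracketing fixes.
    Returns (east_km, north_km) using a flat-earth approximation.
    """
    for (t0, la0, lo0), (t1, la1, lo1) in zip(track, track[1:]):
        if t0 <= obs_time <= t1:
            w = (obs_time - t0) / (t1 - t0)
            c_lat = la0 + w * (la1 - la0)
            c_lon = lo0 + w * (lo1 - lo0)
            break
    else:
        raise ValueError("observation time outside the storm track window")
    north_km = (obs_lat - c_lat) * 111.0
    east_km = (obs_lon - c_lon) * 111.0 * math.cos(math.radians(c_lat))
    return east_km, north_km

track = [(0.0, 25.0, -80.0), (6.0, 25.5, -81.2)]   # hypothetical fixes (hours, lat, lon)
print(storm_relative(25.3, -80.9, 3.0, track))
```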

Relevance:

90.00%

Publisher:

Abstract:

An implementation of Sem-ODB, a database management system based on the Semantic Binary Model, is presented. A metaschema of a Sem-ODB database as well as the top-level architecture of the database engine is defined. A new benchmarking technique is proposed which allows databases built on different database models to compete fairly. This technique is applied to show that Sem-ODB has excellent efficiency compared to a relational database on a certain class of database applications. A new semantic benchmark is designed which allows evaluation of the performance of the features characteristic of semantic database applications. The application used in the benchmark represents a class of problems requiring databases with sparse data, complex inheritances and many-to-many relations. Such databases can be naturally accommodated by the semantic model. A fixed predefined implementation is not enforced, allowing the database designer to choose the most efficient structures available in the DBMS tested. The results of the benchmark are analyzed.

A new high-level querying model for semantic databases is defined. It is proven adequate to serve as an efficient native semantic database interface, and has several advantages over the existing interfaces. It is optimizable and parallelizable, and supports the definition of semantic user views and the interoperability of semantic databases with other data sources such as the World Wide Web, relational, and object-oriented databases. The query is structured as a semantic database schema graph with interlinking conditionals. The query result is a mini-database, accessible in the same way as the original database. The paradigm supports and utilizes the rich semantics and inherent ergonomics of semantic databases.

The analysis and high-level design of a system that exploits the superiority of the Semantic Database Model over other data models in expressive power and ease of use, allowing uniform access to heterogeneous data sources such as semantic databases, relational databases, web sites, ASCII files, and others via a common query interface, is presented. The Sem-ODB engine is used to control all the data sources combined under a unified semantic schema. A particular application of the system, providing an ODBC interface to the WWW as a data source, is discussed.
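For orientation only: in a semantic binary model, all facts are binary relations between abstract objects, or between an object and a value. The sketch below illustrates that general idea with plain dictionaries; it is not the Sem-ODB engine or its query language.

```python
# Facts as binary relations; sparse data and many-to-many relations fit naturally.
facts = {
    ("person:1", "name"): {"Ada"},
    ("person:1", "enrolled_in"): {"course:db", "course:ai"},   # many-to-many
    ("person:2", "enrolled_in"): {"course:db"},
    ("course:db", "title"): {"Databases"},
}

def related(obj, relation):
    """All objects/values linked to `obj` by `relation`."""
    return facts.get((obj, relation), set())

def inverse(relation, target):
    """All objects linked to `target` by `relation` (traversal in reverse)."""
    return {o for (o, r), vals in facts.items() if r == relation and target in vals}

print(related("person:1", "enrolled_in"))       # courses of person:1
print(inverse("enrolled_in", "course:db"))      # everyone enrolled in course:db
```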

Relevance:

90.00%

Publisher:

Abstract:

The virtual quadrilateral is the coalescence of novel data structures that reduces the storage requirements of spatial data without jeopardizing the quality and operability of the inherent information. The data representing the observed area are parsed to ascertain the contiguous measures that, when contained, implicitly define a quadrilateral. The virtual quadrilateral then represents a geolocated area of the observed space where all of the measures are the same. The area, contoured as a rectangle, is pseudo-delimited by the opposite coordinates of the bounding area. Once defined, the virtual quadrilateral is representative of an area in the observed space and is represented in a database by the attributes of its bounding coordinates and the measure of its contiguous space. Virtual quadrilaterals have been found to ensure a lossless reduction of the physical storage, maintain the implied features of the data, facilitate the rapid retrieval of vast amounts of the represented spatial data and accommodate complex queries. The methods presented herein demonstrate that virtual quadrilaterals are created quite easily, are stable and versatile objects in a database, and have proven to be beneficial to exigent spatial data applications such as geographic information systems.
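A minimal sketch of the underlying idea, under the assumption of a greedy rectangle-growing pass (the dissertation's exact construction may differ): cover a grid of measures with axis-aligned rectangles of constant value and store each one only by its opposite corners and the shared measure.

```python
def virtual_quadrilaterals(grid):
    rows, cols = len(grid), len(grid[0])
    covered = [[False] * cols for _ in range(rows)]
    quads = []
    for r in range(rows):
        for c in range(cols):
            if covered[r][c]:
                continue
            value = grid[r][c]
            # Grow right while the measure stays constant.
            c2 = c
            while c2 + 1 < cols and not covered[r][c2 + 1] and grid[r][c2 + 1] == value:
                c2 += 1
            # Grow down while every cell in the candidate row matches.
            r2 = r
            while r2 + 1 < rows and all(
                not covered[r2 + 1][j] and grid[r2 + 1][j] == value for j in range(c, c2 + 1)
            ):
                r2 += 1
            for i in range(r, r2 + 1):
                for j in range(c, c2 + 1):
                    covered[i][j] = True
            quads.append(((r, c), (r2, c2), value))   # opposite corners + measure
    return quads

grid = [[5, 5, 5, 1],
        [5, 5, 5, 1],
        [2, 2, 3, 3]]
print(virtual_quadrilaterals(grid))
```

The decomposition is lossless: every cell is covered by exactly one rectangle, so the original grid can be reconstructed from the stored corners and measures.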

Relevance:

90.00%

Publisher:

Abstract:

Mediation techniques provide interoperability and support integrated query processing among heterogeneous databases. While such techniques help data sharing among different sources, they increase the risk to data security, such as violation of access control rules. Successful protection of information by an effective access control mechanism is a basic requirement for interoperation among heterogeneous data sources.

This dissertation first identified the challenges that a mediation system must address in order to achieve both interoperability and security in an interconnected and collaborative computing environment: (1) context-awareness, (2) semantic heterogeneity, and (3) multiple security policy specification. Few existing approaches address all three of these security challenges in mediation systems. This dissertation provides a modeling and architectural solution to the problem of mediation security that addresses the aforementioned challenges. A context-aware flexible authorization framework was developed to deal with the security challenges faced by mediation systems. The authorization framework consists of two major tasks: specifying security policies and enforcing them. First, the security policy specification provides a generic and extensible method to model security policies with respect to the challenges posed by the mediation system. The security policies in this study are specified as 5-tuples followed by a series of authorization constraints, which are identified based on the relationships among the different security components in the mediation system. Two essential features of mediation systems, i.e., the relationships among authorization components and interoperability among heterogeneous data sources, are the focus of this investigation. Second, this dissertation supports effective access control on mediation systems while providing uniform access to heterogeneous data sources. The dynamic security constraints are handled in the authorization phase instead of the authentication phase, so the maintenance cost of the security specification can be reduced compared with related solutions.
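The abstract does not spell out the components of the 5-tuples, so the layout below (subject, object, action, context predicate, decision) is purely an assumption made to illustrate a context-aware, default-deny authorization check:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    subject: str                       # role or user ("*" = any)
    obj: str                           # mediated resource
    action: str                        # e.g. "read", "write"
    context: Callable[[dict], bool]    # context-awareness: predicate over the request context
    decision: str                      # "permit" or "deny"

policies = [
    Policy("physician", "patient_record", "read",
           lambda ctx: ctx.get("location") == "hospital", "permit"),
    Policy("*", "patient_record", "write",
           lambda ctx: True, "deny"),
]

def authorize(subject, obj, action, ctx):
    for p in policies:
        if p.subject in (subject, "*") and p.obj == obj and p.action == action and p.context(ctx):
            return p.decision
    return "deny"          # default-deny when no policy matches

print(authorize("physician", "patient_record", "read", {"location": "hospital"}))
```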

Relevance:

90.00%

Publisher:

Abstract:

If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow.

We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started out with a small, simple imperative language and only added the features that we deemed most important for practical use. One language feature we addressed is arrays. Due to the various leakage channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays.

Type inference is necessary because a user is usually only concerned with the security types of the input/output variables of a program and would like all types for auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system and therefore facilitates informative error reporting that is generated in a cascading fashion. Algorithm B and the error reporting have been implemented and tested.

Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally based on feedback from the type checking/inference. Core computations interact with code modules from the involved parties only through well-defined interfaces. All code modules are digitally signed to ensure their authenticity and integrity.
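A toy sketch of the core typing idea (not the dissertation's type system or its inference algorithm B): reject an assignment whenever information from a "high" variable, or a "high" program-counter context, could flow into a "low" variable.

```python
LEVELS = {"low": 0, "high": 1}

def expr_level(expr_vars, env):
    """Security level of an expression = the highest level among its variables."""
    return max((LEVELS[env[v]] for v in expr_vars), default=LEVELS["low"])

def check_assign(target, expr_vars, env, pc="low"):
    """Allow `target := expr` only if the expression's level and the program
    counter's level (implicit flow from enclosing branches) are <= target's level."""
    rhs = max(expr_level(expr_vars, env), LEVELS[pc])
    if rhs > LEVELS[env[target]]:
        raise TypeError(f"illegal flow into {target!r} ({env[target]})")
    return True

env = {"secret": "high", "result": "low", "out": "high"}
print(check_assign("out", ["secret"], env))        # high := high -> accepted
try:
    check_assign("result", ["secret"], env)        # low := high  -> rejected
except TypeError as err:
    print(err)
```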

Relevance:

90.00%

Publisher:

Abstract:

Software architecture is the abstract design of a software system. It plays a key role as a bridge between requirements and implementation, and serves as a blueprint for development. The architecture represents a set of early design decisions that are crucial to a system. Mistakes in those decisions are very costly if they remain undetected until the system is implemented and deployed. This is where formal specification and analysis fit in. Formal specification ensures that an architecture design is represented in a rigorous and unambiguous way. Furthermore, a formally specified model allows the use of different analysis techniques for verifying the correctness of those crucial design decisions.

This dissertation presented a framework, called SAM, for the formal specification and analysis of software architectures. In terms of specification, formalisms and mechanisms were identified and chosen to specify software architecture based on different analysis needs. Formalisms for specifying properties were also explored, especially in the case of non-functional properties. In terms of analysis, the dissertation explored both the verification of functional properties and the evaluation of non-functional properties of software architecture. For the verification of functional properties, methodologies were presented on how to apply existing model checking techniques to a SAM model. For the evaluation of non-functional properties, the dissertation first showed how to incorporate stochastic information into a SAM model, and then explained how to translate the model to existing tools and conduct the analysis using those tools.

To alleviate the analysis work, we also provided a tool to automatically translate a SAM model for model checking. All the techniques and methods described in the dissertation were illustrated by examples or case studies, which also served the purpose of advocating the use of formal methods in practice.

Relevance:

90.00%

Publisher:

Abstract:

The need to provide computers with the ability to distinguish the affective state of their users is a major requirement for the practical implementation of affective computing concepts. This dissertation proposes the application of signal processing methods to physiological signals to extract features that can be processed by learning pattern recognition systems to provide cues about a person's affective state. In particular, combining physiological information sensed from a user's left hand in a non-invasive way with pupil diameter information from an eye-tracking system may provide a computer with an awareness of its user's affective responses in the course of human-computer interactions. In this study an integrated hardware-software setup was developed to achieve automatic assessment of the affective state of a computer user. A computer-based "Paced Stroop Test" was designed as a stimulus to elicit emotional stress in the subject during the experiment. Four signals, the Galvanic Skin Response (GSR), the Blood Volume Pulse (BVP), the Skin Temperature (ST) and the Pupil Diameter (PD), were monitored and analyzed to differentiate affective states in the user. Several signal processing techniques were applied to the collected signals to extract their most relevant features. These features were analyzed with learning classification systems to accomplish the affective state identification. Three learning algorithms, Naïve Bayes, Decision Tree and Support Vector Machine, were applied to this identification process and their levels of classification accuracy were compared. The results achieved indicate that the physiological signals monitored do, in fact, have a strong correlation with the changes in the emotional states of the experimental subjects. These results also revealed that the inclusion of pupil diameter information significantly improved the performance of the emotion recognition system.
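A minimal sketch with synthetic data (not the study's recordings or feature set) showing how per-segment features derived from the four monitored signals could be compared across the three classifiers named above, using scikit-learn:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
# Hypothetical features per segment: mean GSR, mean BVP amplitude, mean ST, mean PD.
X = rng.normal(size=(n, 4))
# Hypothetical labels: 1 = "stressed", 0 = "relaxed", loosely tied to GSR and PD.
y = (X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} cross-validated accuracy")
```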

Relevance:

90.00%

Publisher:

Abstract:

Microarray technology provides a high-throughput technique for studying gene expression. Microarrays can help us diagnose different types of cancer, understand biological processes, assess host responses to drugs and pathogens, find markers for specific diseases, and much more. Microarray experiments generate large amounts of data. Thus, effective data processing and analysis are critical for making reliable inferences from the data.

The first part of the dissertation addresses the problem of finding an optimal set of genes (biomarkers) to classify a set of samples as diseased or normal. Three statistical gene selection methods (GS, GS-NR, and GS-PCA) were developed to identify a set of genes that best differentiate between samples. A comparative study of different classification tools was performed and the best combinations of gene selection and classifiers for multi-class cancer classification were identified. For most of the benchmark cancer data sets, the gene selection method proposed in this dissertation, GS, outperformed the other gene selection methods. The classifiers based on Random Forests, neural network ensembles, and K-nearest neighbor (KNN) showed consistently good performance. A striking commonality among these classifiers is that they all use a committee-based approach, suggesting that ensemble classification methods are superior.

The same biological problem may be studied at different research labs and/or performed using different lab protocols or samples. In such situations, it is important to combine results from these efforts. The second part of the dissertation addresses the problem of pooling the results from different independent experiments to obtain improved results. Four statistical pooling techniques (Fisher's inverse chi-square method, the Logit method, Stouffer's Z-transform method, and the Liptak-Stouffer weighted Z-method) were investigated. These pooling techniques were applied to the problem of identifying cell cycle-regulated genes in two different yeast species. As a result, improved sets of cell cycle-regulated genes were identified. The last part of the dissertation explores the effectiveness of wavelet data transforms for the task of clustering. Discrete wavelet transforms, with an appropriate choice of wavelet bases, were shown to be effective in producing clusters that were biologically more meaningful.
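Two of the four pooling techniques named above have simple textbook forms. The sketch below shows Fisher's inverse chi-square method and the (optionally weighted) Stouffer Z-transform applied to hypothetical per-experiment p-values; it is an illustration, not the dissertation's implementation.

```python
import numpy as np
from scipy import stats

def fisher_combine(pvalues):
    """-2 * sum(ln p_i) is chi-square with 2k degrees of freedom under H0."""
    statistic = -2.0 * np.sum(np.log(pvalues))
    return stats.chi2.sf(statistic, df=2 * len(pvalues))

def stouffer_combine(pvalues, weights=None):
    """Sum of per-experiment z-scores, optionally weighted (the Liptak-Stouffer variant)."""
    p = np.asarray(pvalues, dtype=float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
    z = stats.norm.isf(p)                      # per-experiment z-scores
    combined = np.sum(w * z) / np.sqrt(np.sum(w ** 2))
    return stats.norm.sf(combined)

# p-values for one gene from three hypothetical independent experiments:
pvals = [0.04, 0.10, 0.03]
print(fisher_combine(pvals), stouffer_combine(pvals))
```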