880 results for Feature ontology


Abstract:

Green IT is a term that covers various tasks and concepts related to reducing the environmental impact of IT. At the enterprise level, Green IT has significant potential to generate sustainable cost savings: the total number of devices is growing and electricity prices are rising. The lifecycle of a computer can be made more environmentally sustainable using Green IT, e.g. by using energy-efficient components and by implementing device power management. The challenge in using power management at the enterprise level is how to measure and follow up the impact of power management policies. In this thesis, a power management feature was developed for a configuration management system. The feature can be used to automatically power PCs down and on according to a pre-defined schedule and to estimate the total power usage of devices. Measurements indicate that, with this feature, device power consumption can be monitored quite precisely and reduced, which generates electricity cost savings and lessens the environmental impact of IT.
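As a rough illustration of the kind of estimate such a feature produces, the sketch below computes annual savings from a nightly shutdown schedule; all device counts, wattages, tariffs, and schedule hours are invented for the example, not figures from the thesis.

```python
# Rough estimate of energy and cost savings from a nightly shutdown
# schedule. All device counts, power draws, tariffs, and schedule
# hours below are illustrative assumptions, not measured values.

DEVICES = 500            # managed PCs
IDLE_W = 60.0            # draw when left idle overnight (watts)
OFF_W = 2.0              # residual draw when powered down (watts)
OFF_HOURS_PER_DAY = 14   # hours covered by the shutdown schedule
WORK_DAYS = 250          # scheduled days per year
EUR_PER_KWH = 0.15       # assumed electricity price

def annual_kwh(watts: float) -> float:
    """Energy used during the scheduled-off window over a year."""
    return watts * OFF_HOURS_PER_DAY * WORK_DAYS * DEVICES / 1000.0

saved_kwh = annual_kwh(IDLE_W) - annual_kwh(OFF_W)
print(f"Estimated savings: {saved_kwh:,.0f} kWh "
      f"({saved_kwh * EUR_PER_KWH:,.0f} EUR) per year")
```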

Abstract:

Developing software is a difficult and error-prone activity, and the complexity of modern computer applications is significant. Hence, an organised approach to software construction is crucial. Stepwise Feature Introduction, created by R.-J. Back, is a development paradigm in which software is constructed by adding functionality in small increments. The resulting code has an organised, layered structure and can be easily reused. Moreover, interaction with the users of the software and correctness concerns are essential elements of the development process, contributing to the high quality and functionality of the final product. The paradigm of Stepwise Feature Introduction has been successfully applied in an academic environment to a number of small-scale developments. The thesis examines the paradigm and its suitability for the construction of large and complex software systems by focusing on the development of two software systems of significant complexity. Throughout the thesis we propose a number of improvements and modifications that should be applied to the paradigm when developing or reengineering large and complex software systems. The discussion covers various aspects of software development that relate to Stepwise Feature Introduction. More specifically, we evaluate the paradigm against common practices of object-oriented programming and design and against agile development methodologies. We also outline a strategy for testing systems built with the paradigm of Stepwise Feature Introduction.
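As a rough sketch of the layered structure the paradigm yields, the example below introduces functionality in two small increments, each layer extending the previous one while preserving its behaviour; the text-buffer domain and class names are invented for illustration, not taken from the case studies in the thesis.

```python
# Minimal sketch of Stepwise Feature Introduction: each layer is a
# class that extends the previous one, adding one feature while
# preserving the behaviour already established. The example domain
# (a text buffer with undo) is hypothetical.

class BufferLayer:
    """Layer 1: a plain text buffer."""
    def __init__(self) -> None:
        self.text = ""

    def insert(self, s: str) -> None:
        self.text += s

class UndoLayer(BufferLayer):
    """Layer 2: introduces undo on top of layer 1."""
    def __init__(self) -> None:
        super().__init__()
        self._history: list[str] = []

    def insert(self, s: str) -> None:
        self._history.append(self.text)   # record state, then delegate
        super().insert(s)

    def undo(self) -> None:
        if self._history:
            self.text = self._history.pop()

buf = UndoLayer()
buf.insert("hello")
buf.insert(" world")
buf.undo()
print(buf.text)  # -> "hello"
```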

Abstract:

In this paper a computer program to model and support product design is presented. The product is represented through a hierarchical structure that allows the user to navigate across the product's components, with the aim of facilitating each step of the detail design process. A graphical interface was also developed, which visually presents the contents of the product structure to the user. Features are used as building blocks for the parts that compose the product, and an object-oriented methodology was used to implement the product structure. Finally, an expert system was also implemented, whose knowledge base rules help the user design a product that meets design and manufacturing requirements.
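A minimal sketch of such a feature-based, hierarchical product structure is given below; the part names, feature parameters, and the single manufacturability rule standing in for the expert system are all illustrative assumptions, not details from the paper.

```python
# Sketch of a hierarchical product structure in which parts are
# composed of features (composite pattern). Names and the single
# manufacturability rule are illustrative assumptions.

class Feature:
    def __init__(self, name: str, **params: float) -> None:
        self.name = name
        self.params = params

class Part:
    def __init__(self, name: str) -> None:
        self.name = name
        self.features: list[Feature] = []
        self.children: list["Part"] = []

    def walk(self, depth: int = 0):
        """Navigate the product hierarchy, as the GUI tree would."""
        yield depth, self
        for child in self.children:
            yield from child.walk(depth + 1)

def check_rules(part: Part) -> list[str]:
    """Toy 'knowledge base': flag holes too small to drill."""
    issues = []
    for _, p in part.walk():
        for f in p.features:
            if f.name == "hole" and f.params.get("diameter_mm", 99) < 1.0:
                issues.append(f"{p.name}: hole below 1 mm drill limit")
    return issues

gearbox = Part("gearbox")
housing = Part("housing")
housing.features.append(Feature("hole", diameter_mm=0.5))
gearbox.children.append(housing)
print(check_rules(gearbox))
```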

Abstract:

Software plays an important role in our society and economy. Software development is an intricate process comprising many different tasks: gathering requirements, designing new solutions that fulfill these requirements, and implementing these designs in a programming language as a working system. As a consequence, the development of high-quality software is a core problem in software engineering. This thesis focuses on the validation of software designs. The analysis of designs is of great importance, since errors originating in designs may appear in the final system, and it is considered economical to rectify problems as early in the software development process as possible. Practitioners often create and visualize designs using modeling languages, one of the most popular being the Unified Modeling Language (UML). The analysis of designs can be done manually, but for large systems mechanisms that analyze designs automatically are needed. In this thesis, we propose an automatic approach to analyzing UML-based designs using logic reasoners. The approach first translates UML-based designs into logic facts, a form understandable by reasoners, and then uses the reasoners to infer the logical consequences of these facts. We have implemented the proposed translations in a tool that can be used with any standard-compliant UML modeling tool. Moreover, we validate the proposed approach by automatically analyzing hundreds of UML-based designs, comprising thousands of model elements, available in an online model repository. The proposed approach is limited in scope, but it is fully automatic and does not require any expertise in logic languages from the user. We exemplify the approach with two applications: the validation of domain-specific languages and the validation of web service interfaces.
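The thesis's actual translation scheme and reasoner are not reproduced here, but the sketch below illustrates the general idea under simplified assumptions: class-diagram elements are encoded as logic facts, and a reasoner-style check infers one logical consequence, a cycle in the generalization hierarchy, which signals an inconsistent design.

```python
# Illustrative sketch only: encode a toy UML class diagram as facts
# and infer one logical consequence (a generalization cycle, which a
# consistent design must not contain). The fact format is invented.

generalizations = {          # facts: generalization(child, parent)
    ("Car", "Vehicle"),
    ("Vehicle", "Asset"),
    ("Asset", "Car"),        # deliberately inconsistent
}

def parents(cls: str) -> set[str]:
    return {p for c, p in generalizations if c == cls}

def has_cycle(cls: str, seen: frozenset = frozenset()) -> bool:
    """Transitive closure over generalization, watching for cycles."""
    if cls in seen:
        return True
    return any(has_cycle(p, seen | {cls}) for p in parents(cls))

for c in {c for c, _ in generalizations}:
    if has_cycle(c):
        print(f"inconsistent design: generalization cycle through {c}")
        break
```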

Abstract:

A growing concern for organisations is how they should deal with increasing amounts of collected data. With fierce competition and smaller margins, organisations that are able to fully realize the potential of the data they collect can gain an advantage over their competitors. It is almost impossible to avoid imprecision when processing large amounts of data, yet many of the available information systems are not capable of handling imprecise data, even though handling it can offer various advantages. Expert knowledge stored as linguistic expressions is a good example of imprecise but valuable data, i.e. data that is hard to pin down to a definitive value. There is an obvious concern among organisations about how this problem should be handled; finding new methods for processing and storing imprecise data is therefore a key issue. Additionally, it is equally important to show that tacit knowledge and imprecise data can be used with success, which encourages organisations to analyse their imprecise data. The objective of the research was therefore to explore how fuzzy ontologies could facilitate the exploitation and mobilisation of tacit knowledge and imprecise data in organisational and operational decision-making processes. The thesis introduces both practical and theoretical advances on how fuzzy logic, (fuzzy) ontologies, and OWA operators can be utilized for different decision-making problems. It is demonstrated how a fuzzy ontology can model tacit knowledge collected from wine connoisseurs; the approach can be generalised and applied to other practically important problems, such as intrusion detection. Additionally, a fuzzy ontology is applied in a novel consensus model for group decision making. By combining the fuzzy ontology with Semantic Web techniques, novel applications have been designed that show how the mobilisation of knowledge can successfully utilize imprecise data as well. An important part of decision-making processes is aggregation, which in combination with a fuzzy ontology provides a promising basis for demonstrating the benefits of handling imprecise data. The new aggregation operators defined in the thesis provide new possibilities for handling imprecision and expert opinions, demonstrated through both theoretical examples and practical implementations. This thesis shows the benefits of utilizing all the data one possesses, including imprecise data; by combining the concept of fuzzy ontology with the Semantic Web movement, it aspires to show the corporate world and industry the benefits of embracing fuzzy ontologies and imprecision.
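As a concrete example of the aggregation step, an ordered weighted averaging (OWA) operator applies its weights to the ranked inputs rather than to fixed argument positions; the sketch below shows the standard definition with illustrative weights, not any operator defined in the thesis.

```python
# Standard OWA operator: weights apply to the *ranked* inputs, not to
# fixed argument positions. The weights and scores are illustrative.

def owa(values: list[float], weights: list[float]) -> float:
    if len(values) != len(weights) or abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("need one weight per value, summing to 1")
    ranked = sorted(values, reverse=True)        # b_1 >= b_2 >= ...
    return sum(w * b for w, b in zip(weights, ranked))

scores = [0.9, 0.4, 0.7]             # e.g. expert ratings of one wine
print(owa(scores, [1.0, 0.0, 0.0]))  # max operator  -> 0.9
print(owa(scores, [0.0, 0.0, 1.0]))  # min operator  -> 0.4
print(owa(scores, [1/3, 1/3, 1/3]))  # plain average -> 0.666...
```

Choosing the weight vector moves the operator anywhere between max, min, and the plain average, which is what makes OWA useful for modelling optimistic or pessimistic aggregation of expert opinions.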

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Abstract:

This study examines information security as a process (information securing) in terms of what it does, especially beyond its obvious role of protector. It investigates concepts related to ‘ontology of becoming’, and examines what it is that information securing produces. The research is theory driven and draws upon three fields: sociology (especially actor-network theory), philosophy (especially Gilles Deleuze and Félix Guattari’s concepts of ‘machine’, ‘territory’ and ‘becoming’, and Michel Serres’s concept of ‘parasite’), and information systems science (the subject of information security). Social engineering (used here in the sense of breaking into systems through non-technical means) and software cracker groups (groups which remove copy protection systems from software) are analysed as examples of breaches of information security. Firstly, the study finds that information securing is always interruptive: every entity (regardless of whether or not it is malicious) that becomes connected to information security is interrupted. Furthermore, every entity changes, becomes different, as it makes a connection with information security (ontology of becoming). Moreover, information security organizes entities into different territories. However, the territories – the insides and outsides of information systems – are ontologically similar; the only difference is in the order of the territories, not in the ontological status of the entities that inhabit them. In other words, malicious software is ontologically similar to benign software; both are users in terms of the system. The difference is based on the order of the system and its users: who uses the system and what the system is used for. Secondly, the research shows that information security is always external (in the terms of this study, a ‘parasite’) to the information system that it protects. Information securing creates and maintains order while simultaneously disrupting the existing order of the system that it protects. For example, in terms of the software itself, the implementation of a copy protection system is an entirely external addition; in fact, this parasitic addition makes the software different. Thus, information security disrupts that which it is supposed to defend from disruption. Finally, it is asserted that, in its interruption, information security is a connector that creates passages; it connects users to systems while also creating its own threats. For example, copy protection systems invite crackers, and information security policies entice social engineers to use and exploit information security techniques in a novel manner.

Abstract:

Adrenocortical tumors (ACT) in children under 15 years of age exhibit some clinical and biological features distinct from ACT in adults. Cell proliferation, hypertrophy and cell death in the adrenal cortex during the last months of gestation and the immediate postnatal period seem to be critical for the origin of ACT in children. Studies with large numbers of patients with childhood ACT have indicated a median age at diagnosis of about 4 years. In our institution, the median age was 3 years and 5 months, while the median age at first signs and symptoms was 2 years and 5 months (N = 72). Using the comparative genomic hybridization technique, we have reported a high frequency of 9q34 amplification in adenomas and carcinomas. This finding has more recently been confirmed by investigators in England. The lower socioeconomic status, distinct ethnic groups and regional differences of patients in Southern Brazil relative to those in England indicate that these factors are not important in determining 9q34 amplification. Candidate amplified genes mapped to this locus are currently being investigated, and Southern blot results obtained so far have ruled out amplification of the abl oncogene. Amplification of 9q34 has not been found to be related to tumor size, staging, or malignant histopathological features, nor does it seem to be responsible for the higher incidence of ACT observed in Southern Brazil, but it could be related to an embryonic origin of ACT.

Abstract:

In a serial feature-positive conditional discrimination procedure, the properties of a target stimulus A are defined by the presence or absence of a feature stimulus X preceding it. In the present experiment, composite features preceded targets associated with operant responses of two different topographies (right and left bar pressing); matching and non-matching-to-sample arrangements were also used. Five water-deprived Wistar rats were trained in 6 different trial types: X-R→Ar and X-L→Al, in which X and A were visual stimuli of the same modality and reinforcement was contingent on pressing either the right (r) or left (l) bar that had its light on during the feature (matching-to-sample); Y-R→Bl and Y-L→Br, in which Y and B were auditory stimuli of the same modality and reinforcement was contingent on pressing the bar that had its light off during the feature (non-matching-to-sample); and A- and B- alone. After 100 training sessions, the animals were submitted to transfer tests with the targets used plus a new one (an auditory click). Average percentages of stimuli with a response were measured. Acquisition occurred completely only for Y-L→Br+; however, complex associations were established along training. Transfer was not complete during the tests, since concurrent effects of extinction and response generalization also occurred. The results suggest the use of both simple conditioning and configurational strategies, favoring the most recent theories of conditional discrimination learning. The implications of using complex arrangements for discussing these theories are considered.

Abstract:

Personalized medicine will revolutionize our capabilities to combat disease. Working toward this goal, a fundamental task is the deciphering of genetic variants that are predictive of complex diseases. Modern studies, in the form of genome-wide association studies (GWAS), have afforded researchers the opportunity to reveal new genotype-phenotype relationships through the extensive scanning of genetic variants. These studies typically contain over half a million genetic features for thousands of individuals. Examining such data with methods other than univariate statistics is a challenging task requiring advanced algorithms that scale to the genome-wide level. In the future, next-generation sequencing (NGS) studies will contain an even larger number of common and rare variants. Machine learning-based feature selection algorithms have been shown to effectively create predictive models for various genotype-phenotype relationships. This work explores the problem of selecting genetic variant subsets that are the most predictive of complex disease phenotypes through various feature selection methodologies, including filter, wrapper and embedded algorithms. The examined machine learning algorithms were demonstrated not only to be effective at predicting the disease phenotypes, but also to do so efficiently through the use of computational shortcuts. While much of the work could be run on high-end desktops, some of it was extended to parallel computers, helping to ensure that the methods will also scale to NGS data sets. Further, these studies analyzed the relationships between various feature selection methods and demonstrated the need for careful testing when selecting an algorithm. It was shown that there is no universally optimal algorithm for variant selection in GWAS; rather, methodologies need to be selected based on the desired outcome, such as the number of features to be included in the prediction model. It was also demonstrated that without proper model validation, for example using nested cross-validation, models can yield overly optimistic prediction accuracies and decreased generalization ability. It is through the implementation and application of machine learning methods that one can extract predictive genotype-phenotype relationships and biological insights from genetic data sets.
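A minimal sketch of the validation point above, assuming a scikit-learn-style workflow with synthetic data and an illustrative parameter grid: the feature selector is tuned only inside the inner loop, so the outer loop yields an unbiased estimate of generalization.

```python
# Nested cross-validation sketch: a filter-style feature selector and
# classifier are tuned in the inner loop, and generalization is
# estimated in the outer loop. Synthetic data stands in for genotypes.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=2000,
                           n_informative=20, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(f_classif)),        # univariate filter
    ("clf", LogisticRegression(max_iter=1000)),
])
grid = {"select__k": [10, 50, 200]}            # number of variants kept

inner = GridSearchCV(pipe, grid, cv=3)         # tunes k on training folds
outer_scores = cross_val_score(inner, X, y, cv=5)  # unbiased estimate
print(outer_scores.mean())
```

Selecting features on the full data set before cross-validating would leak information from the test folds and is exactly the source of the overly optimistic accuracies the abstract warns about.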

Abstract:

Ontology matching is an important task when data from multiple data sources are integrated. Problems of ontology matching have been studied widely in the research literature, and many different solutions and approaches have been proposed, including in commercial software tools. In this survey, well-known approaches to ontology matching, and to its subtype schema matching, are reviewed and compared. The aim of this report is to summarize the knowledge about state-of-the-art solutions from the research literature, discuss how the methods work on different application domains, and analyze the pros and cons of different open source and academic tools in the commercial world.
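As a toy instance of the element-level techniques such surveys cover, the sketch below matches concepts across two ontologies by label similarity alone; the ontologies and the 0.7 threshold are invented, and real matchers combine lexical evidence with structural and semantic cues.

```python
# Naive element-level ontology matching: pair up concepts whose labels
# are lexically similar. The ontologies and the 0.7 threshold are
# illustrative; production matchers add structural/semantic evidence.

from difflib import SequenceMatcher

onto_a = ["Author", "Publication", "JournalArticle"]
onto_b = ["Writer", "Publications", "ArticleInJournal"]

def label_sim(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for a in onto_a:
    best = max(onto_b, key=lambda b: label_sim(a, b))
    score = label_sim(a, best)
    if score >= 0.7:
        print(f"{a} <-> {best}  (similarity {score:.2f})")
```

Note how the purely lexical matcher finds "Publication <-> Publications" but misses "Author <-> Writer", which is why the surveyed systems layer synonym dictionaries and structural comparison on top.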

Abstract:

In this work, the image analysis program INCA Feature, used for analysing particle size distributions, was tested. Particle size distributions were determined from electron microscope images with the INCA Feature program, based on particle projection images, for talc used as a coating pigment and for two different carbonate grades. In addition, particle size distributions were determined for silica and alumina particles used as auxiliary agents in filtration and purification. The particle size distributions determined with the image analysis program were compared with distributions analysed with the SediGraph 5100 analyser, which is based on particle settling velocity (sedimentation), and with the Coulter LS 230 method, which is based on laser diffraction. The SediGraph 5100 and the image analysis program gave very similar mean values for the size distribution of the talc particles, whereas the mean value given by the Coulter LS 230 instrument differed from these. All the compared methods ranked the particles of the different samples in the same size order. However, the results of the methods cannot be compared numerically, since each analysis method measures particle size based on a different property of the particle. Based on this work, all the tested analysis methods are suitable for determining the particle size distributions of paper pigments. This work also established the number of particles required for a reliable image analysis result: at least 300 particles must be analysed. Too large a sample amount increases the scatter of the size distribution and extends the analysis time to several hours. Sample preparation requires further study, as it is the most important and critical step of particle size analysis performed with SEM and image analysis software. The increasing availability of automated microscopes makes the analyses easier and faster, so the popularity of the method will grow in paper pigment research as well. The high price of the instruments and the special expertise required of the user will, at least for now, limit their use to research institutes.
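A simplified sketch of the image-analysis step is shown below: each particle's projected area is converted to an equivalent circular diameter, and the 300-particle reliability threshold found in the work is enforced. The area values would in practice come from INCA Feature measurements; here they are synthetic.

```python
# Sketch: derive a particle size distribution from projected areas,
# using the equivalent circular diameter d = 2*sqrt(A/pi). The areas
# would come from SEM image analysis; here they are made-up numbers.

import math
import random

random.seed(1)
areas_um2 = [random.lognormvariate(0.5, 0.6) for _ in range(400)]

MIN_PARTICLES = 300          # reliability threshold found in the work
if len(areas_um2) < MIN_PARTICLES:
    raise ValueError("too few particles for a reliable distribution")

diameters = sorted(2 * math.sqrt(a / math.pi) for a in areas_um2)
median_um = diameters[len(diameters) // 2]
print(f"{len(diameters)} particles, median diameter {median_um:.2f} um")
```

The equivalent-diameter convention is one reason the numerical results cannot be compared directly with sedimentation or laser diffraction values: each method reports a "size" derived from a different physical property.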

Abstract:

The text examines Sergej Nikolajevič Bulgakov's description of the philosopheme as thoroughly "immanent" (viz., the immanence of man qua being, such that ontology in Bulgakov becomes a conceptual analogue for immanence) and the corollary that such immanence necessarily excludes the problematic of the "creation of the world." Because of this resolute immanence, and because the creation of the world in the form of creatio ex nihilo requires a non-immanent or non-ontological thought and concept, the problematic for Bulgakov is approached only by a theologeme. Appropriating this argument as material for a cursory philosopheme, the text attempts to transform Bulgakov's theologeme into a philosopheme through an elision of the God and dogma that overdetermine the theologeme. This philosopheme (nascent within Bulgakov's work itself, in both his hesitation toward the overdetermination of immanence and his commitment to the problem of creation) would be a thoroughly non-ontological philosopheme, one that allows for the treatment of the problematic of "creation" or singular ontogenesis, yet with the corollary that this philosopheme must rely on an "ontological zero." Such a philosopheme qua ontologically empty formula nevertheless remains ontologically significant insofar as it evinces the limit of ontology, in the ontological zero's non-relationality to ontology.

Abstract:

Abstract: Nietzsche's Will-to-Power Ontology: An Interpretation of Beyond Good and Evil § 36. By: Mark Minuk. Will-to-power is the central component of Nietzsche's philosophy, and passage 36 of Beyond Good and Evil is essential to coming to an understanding of it. I argue for and defend the thesis that will-to-power constitutes Nietzsche's ontology, and offer a new understanding of what that means. Nietzsche's ontology can be talked about as though it were a traditional substance ontology (i.e., a world made up of forces; a duality of conflicting forces described as 'towards which' and 'away from which'). However, I argue that what defines this ontology is an understanding of valuation as ontologically fundamental: the basis of interpretation, and that from which a substance ontology emerges. In the second chapter, I explain Nietzsche's ontology, as reflected in this passage, through a discussion of Heidegger's two ontological categories in Being and Time (readiness-to-hand and present-at-hand). In a nutshell, it means that the world of our desires and passions (the most basic of which is the desire for power) is ontologically more fundamental than the material world, or any other interpretation; that is to say, the material world emerges out of the world of our desires and passions. In the first chapter, I address the problematic form of the passage, reflected in its first sentence. The passage is written in a hypothetical style, makes no claim to positive knowledge or truth, and superficially looks like a Schopenhauerian argument for the metaphysics of the will, which Nietzsche rejects. I argue that the hypothetical form of the passage is a matter of style, namely, the style of a free spirit for whom the question of truth is reframed as a question of values. In the third and final chapter, I address the charge that Nietzsche's interpretation is a conscious anthropomorphic projection. I suggest that the charge rests on a distinction (between nature and man) that Nietzsche rejects. I also address the problem of the causality of the will for Nietzsche by suggesting that an alternative, perspectival form of causality is possible.

Abstract:

A feature-based fitness function is applied in a genetic programming system to synthesize stochastic gene regulatory network models whose behaviour is defined by a time course of protein expression levels. Typically, when targeting time series data, the fitness function is based on a sum of errors over the values of the fluctuating signal. While this approach is successful in many instances, its performance can deteriorate in the presence of noise. This thesis explores a fitness measure determined from a set of statistical features characterizing the time series' sequence of values, rather than the actual values themselves. Through a series of experiments involving symbolic regression with added noise and gene regulatory network models based on the stochastic π-calculus, the measure is shown to successfully target oscillating and non-oscillating signals. This practical and versatile fitness function offers an alternative approach, worthy of consideration for use in algorithms that evaluate noisy or stochastic behaviour.
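A minimal sketch of the idea, with an illustrative feature set rather than the one used in the thesis: fitness is the distance between statistical feature vectors of the candidate and target series, which is more tolerant of noise and phase shift than a pointwise sum of errors.

```python
# Feature-based fitness sketch: compare statistical features of two
# time series instead of their raw values, which is more forgiving of
# noise and phase shifts. The four features chosen are illustrative.

import numpy as np

def features(x: np.ndarray) -> np.ndarray:
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    ac1 = (xc[:-1] @ xc[1:]) / (xc @ xc + 1e-12)   # lag-1 autocorrelation
    spectrum = np.abs(np.fft.rfft(xc))
    dom_freq = spectrum[1:].argmax() + 1           # dominant frequency bin
    return np.array([x.mean(), x.std(), ac1, dom_freq])

def fitness(candidate, target) -> float:
    """Smaller is better: distance between feature vectors."""
    return float(np.linalg.norm(features(candidate) - features(target)))

t = np.linspace(0, 4 * np.pi, 200)
target = np.sin(t)
noisy = np.sin(t) + np.random.default_rng(0).normal(0, 0.3, t.size)
print(fitness(noisy, target))            # small: same underlying oscillation
print(fitness(np.ones_like(t), target))  # large: no oscillation at all
```

Because the features summarize the shape of the signal rather than its sample-by-sample values, a noisy copy of the target still scores well, which is exactly the robustness property the abstract describes.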