980 results for Algorithm Analysis and Problem Complexity


Relevance: 100.00%

Publisher:

Abstract:

This paper addresses the problem of privacy-preserving data publishing for social networks. Research on protecting the privacy of individuals and the confidentiality of data in social networks has recently been receiving increasing attention. Privacy is an important issue when one wants to make use of data that involves individuals' sensitive information, especially at a time when data collection is becoming easier and sophisticated data-mining techniques are becoming more efficient. In this paper, we discuss various privacy attack vectors on social networks. We present algorithms that sanitize data to make it safe for release while preserving useful information, and discuss ways of analyzing the sanitized data. This study provides a summary of the current state of the art, on the basis of which we expect to see advances in social-network data publishing for years to come.
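A common sanitization target in this literature is k-anonymity; the following is a minimal sketch of the check, not the paper's own algorithm, and the records, quasi-identifiers, and value of k are illustrative:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """A table is k-anonymous if every combination of quasi-identifier
    values is shared by at least k records, so no individual can be
    singled out by those attributes alone."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())

profiles = [
    {"age": "30-39", "city": "Oslo",   "disease": "flu"},
    {"age": "30-39", "city": "Oslo",   "disease": "cold"},
    {"age": "40-49", "city": "Bergen", "disease": "flu"},
    {"age": "40-49", "city": "Bergen", "disease": "asthma"},
]

print(is_k_anonymous(profiles, ["age", "city"], 2))  # True
print(is_k_anonymous(profiles, ["age", "city"], 3))  # False
```

A sanitization algorithm would generalize or suppress quasi-identifier values until a check like this passes, while trying to preserve as much analytical utility as possible.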

Relevance: 100.00%

Publisher:

Abstract:

Requirements engineering (RE) often entails interdisciplinary groups of people working together to find novel and valuable solutions to a complex design problem. In such situations RE requires creativity in a form where interactions among stakeholders are particularly important: collaborative creativity. However, few studies have explicitly concentrated on understanding collaborative creativity in RE, resulting in limited advice for practitioners on how to support this aspect of RE. This paper provides a framework of factors characterising collaborative creative processes in RE. These factors enable a systematic investigation of the collaboratively creative nature of RE. They can potentially guide practitioners when facilitating RE efforts, and also provide researchers with ideas on where to focus when developing methods and tools for RE. © 2013 IEEE.

Relevance: 100.00%

Publisher:

Abstract:

The financial crisis and Great Recession have been followed by a jobs-shortage crisis that most forecasts predict will persist for years under current policies. This paper argues for a wage-led recovery and growth program, which is the only way to remedy the deep causes of the crisis and escape the jobs crisis. Such a program is the polar opposite of the current policy orthodoxy, showing how much is at stake. Winning the argument for wage-led recovery will require winning the war of ideas about economics that has its roots in Keynes' challenge to classical macroeconomics in the 1920s and 1930s. That will involve showing how the financial crisis and Great Recession were the ultimate result of three decades of neoliberal policy, which produced wage stagnation by severing the link between wages and productivity growth and made asset-price inflation and debt the engine of demand growth in place of wages; showing how wage-led policy resolves the current problem of global demand shortage without pricing out labor; and developing a detailed set of policy proposals that flow from these understandings. The essence of a wage-led policy approach is to rebuild the link between wages and productivity growth, combined with expansionary macroeconomic policy that fills the current demand shortfall so as to push the economy onto a recovery path. Both sets of measures are necessary. Expansionary macro policy (i.e. fiscal stimulus and easy monetary policy) without rebuilding the wage mechanism will not produce sustainable recovery and may end in fiscal crisis. Rebuilding the wage mechanism without expansionary macro policy is likely to leave the economy stuck in the orbit of stagnation.

Relevance: 100.00%

Publisher:

Abstract:

In most cases, the cost of a control system increases with its complexity. The proportional (P) controller is the simplest and most intuitive structure for the implementation of linear control systems. The difficulty of finding the stability range of feedback systems with P controllers using the Routh-Hurwitz criterion increases with the order of the plant. For high-order plants, the stability range cannot be easily obtained from the investigation of the coefficient signs in the first column of the Routh array. A direct method for the determination of the stability range is presented. The method is easy to understand and to compute, and offers students a better comprehension of this subject. A program in the MATLAB language based on the proposed method, design examples, and class assessments are provided in order to support the pedagogical aims. The method and the program enable the user to specify a decay rate, and also extend to proportional-integral (PI), proportional-derivative (PD), and proportional-integral-derivative (PID) controllers.
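The flavour of such a computation can be sketched in Python rather than the paper's MATLAB: build the Routh array numerically for a candidate gain and bisect for the stability boundary. The third-order plant 1/((s+1)(s+2)(s+3)) is our illustrative choice, not necessarily one of the paper's examples:

```python
def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given in
    descending powers; all poles lie in the left half-plane iff
    every entry is positive."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    for r in rows:
        r += [0.0] * (width - len(r))
    for _ in range(len(coeffs) - 2):
        prev2, prev = rows[-2], rows[-1]
        pivot = prev[0] if prev[0] != 0 else 1e-12  # epsilon trick for a zero pivot
        rows.append([(pivot * prev2[j + 1] - prev2[0] * prev[j + 1]) / pivot
                     for j in range(width - 1)] + [0.0])
    return [r[0] for r in rows]

def stable_with_gain(K):
    # Unity feedback around K/((s+1)(s+2)(s+3)) gives the
    # characteristic polynomial s^3 + 6s^2 + 11s + (6 + K).
    return all(c > 0 for c in routh_first_column([1.0, 6.0, 11.0, 6.0 + K]))

# Bisect for the largest stabilizing gain (Routh-Hurwitz gives K < 60 exactly).
lo, hi = 0.0, 1e4
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if stable_with_gain(mid) else (lo, mid)
print(round(lo, 2))  # 60.0
```

The same loop extends to PI/PD/PID design by bisecting over one gain at a time with the others held fixed, which mirrors the pedagogical use described above.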

Relevance: 100.00%

Publisher:

Abstract:

Purpose - The purpose of this paper is to provide information on lubricant contamination by biodiesel using vibration analysis and a neural network.
Design/methodology/approach - The possible contamination of lubricants is verified by analyzing vibration signals from a bench test, run under determined conditions, with a neural network.
Findings - Results have shown that classical signal analysis methods could not reveal any correlation between the signal and the presence of contamination, or the contamination grade. On the other hand, the use of a probabilistic neural network (PNN) was very successful in the identification and classification of contamination and its grade.
Research limitations/implications - This study was done for some specific kinds of biodiesel; other types of biodiesel could be analyzed.
Practical implications - Contamination information is present in the vibration signal, even if it is not evident from classical vibration analysis. In addition, the use of a PNN gives a relatively simple and easy-to-use detection tool with good confidence. The training process is fast, and allows implementation of an adaptive training algorithm.
Originality/value - This research could be extended to an internal combustion engine in order to verify possible contamination by biodiesel.
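A PNN of the kind used here is essentially a Gaussian-kernel classifier: one kernel per training sample, one summation unit per class. A minimal sketch with made-up two-dimensional "vibration features" (the feature values and the smoothing parameter sigma are illustrative, not the paper's data):

```python
import math

def pnn_classify(x, training, sigma=0.2):
    """PNN decision rule: place a Gaussian kernel on every training
    sample, average the kernel responses per class, and predict the
    class with the highest score."""
    def class_score(samples):
        return sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, s))
                     / (2 * sigma ** 2))
            for s in samples) / len(samples)
    return max(training, key=lambda label: class_score(training[label]))

training = {
    "clean":        [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18)],
    "contaminated": [(0.90, 0.80), (0.85, 0.90), (0.95, 0.85)],
}
print(pnn_classify((0.13, 0.22), training))  # clean
print(pnn_classify((0.88, 0.84), training))  # contaminated
```

"Training" a PNN only means storing the samples, which is why the abstract can describe the training process as fast and amenable to adaptive updates.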

Relevance: 100.00%

Publisher:

Abstract:

To contribute to our understanding of the genome complexity of sugarcane, we undertook a large-scale expressed sequence tag (EST) program. More than 260,000 cDNA clones were partially sequenced from 26 standard cDNA libraries generated from different sugarcane tissues. After processing of the sequences, 237,954 high-quality ESTs were identified. These ESTs were assembled into 43,141 putative transcripts. Of the assembled sequences, 35.6% presented no matches with existing sequences in public databases. A global analysis of the whole SUCEST data set indicated that 14,409 assembled sequences (33% of the total) contained at least one cDNA clone with a full-length insert. Annotation of the 43,141 assembled sequences associated almost 50% of the putatively identified sugarcane genes with protein metabolism, cellular communication/signal transduction, bioenergetics, and stress responses. Inspection of the translated assembled sequences for conserved protein domains revealed 40,821 amino acid sequences with 1,415 Pfam domains. Reassembling the consensus sequences of the 43,141 transcripts revealed a 22% redundancy in the first assembly. This indicated that possibly 33,620 unique genes had been identified and that >90% of the sugarcane expressed genes had been tagged.

Relevance: 100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Publisher:

Abstract:

This paper addresses the problem of processing biological signals, such as cardiac beats, and signals in the audio and ultrasonic ranges, calculating wavelet coefficients in real time with the processor clock running at the frequencies of present-day ASICs and FPGAs. The Parallel Filter Architecture for the DWT has been improved, calculating wavelet coefficients in real time with the hardware reduced to 60%. The new architecture, which also processes the IDWT, is implemented with Radix-2 or Booth-Wallace constant multipliers. Including series memory register banks, a single-integrated-circuit signal analyzer for the ultrasonic range is presented.
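Independently of the hardware architecture, the per-level DWT computation is a pair of down-sampled filters. A minimal Haar sketch of the DWT and its inverse follows; the paper's architecture targets longer filter banks in silicon, and this shows only the arithmetic being parallelized:

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_dwt(signal):
    """One DWT level: low-pass (approximation) and high-pass (detail)
    filtering of even/odd sample pairs, down-sampled by two."""
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / SQRT2 for a, b in pairs]
    detail = [(a - b) / SQRT2 for a, b in pairs]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse transform (IDWT): perfect reconstruction of the input."""
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / SQRT2, (a - d) / SQRT2]
    return out

sig = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
approx, detail = haar_dwt(sig)
rec = haar_idwt(approx, detail)
print(max(abs(x - y) for x, y in zip(sig, rec)) < 1e-12)  # True
```

In a hardware realization each multiply-accumulate in these comprehensions maps to a constant multiplier, which is where the Radix-2 and Booth-Wallace variants mentioned above come in.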

Relevance: 100.00%

Publisher:

Abstract:

Dosage and frequency of treatment schedules are important for successful chemotherapy. In this work, however, we argue that cell-kill response and tumour growth should not be treated separately, and that both are therefore essential in a mathematical cancer model. This paper presents a mathematical model for the sequencing of cancer chemotherapy and surgery. Our purpose is to investigate treatments for large human tumours considering a suitable cell-kill dynamics. We use biological and pharmacological data in a numerical approach, where drug administration occurs in cycles (periodic infusion) and surgery is performed instantaneously. Moreover, we also present a stability analysis for a chemotherapeutic model with continuous drug administration. In agreement with Norton & Simon [22], our results indicate that chemotherapy is less efficient in treating tumours that have reached a plateau level of growth, and that a combination with surgical treatment can provide better outcomes.
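The qualitative behaviour can be reproduced with a toy simulation: Gompertz growth plus a Norton-Simon-style cell kill proportional to the instantaneous growth rate, with the drug given in periodic infusions. All parameter values below are illustrative, not the paper's fitted data:

```python
import math

def simulate(days, dose=1.0, cycle=21.0, infusion=1.0, dt=0.01):
    """Euler integration of Gompertz tumour growth with a log-kill
    term active during a short infusion window each cycle."""
    N, plateau = 1e9, 1e12          # tumour cells; plateau size
    r, kill = 0.005, 2.0            # growth rate; kill strength
    t = 0.0
    while t < days:
        on_drug = (t % cycle) < infusion
        growth = r * N * math.log(plateau / N)
        dNdt = growth - (kill * dose * growth if on_drug else 0.0)
        N = max(N + dt * dNdt, 1.0)
        t += dt
    return N

treated, untreated = simulate(120), simulate(120, dose=0.0)
print(treated < untreated)  # True
```

Because the kill term is proportional to the growth rate, therapy is weak exactly when the tumour is near its plateau and growing slowly, which is the Norton-Simon effect the abstract refers to.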

Relevance: 100.00%

Publisher:

Abstract:

The present paper solves the multi-level capacitated lot sizing problem with backlogging (MLCLSPB) by combining a genetic algorithm with the solution of mixed-integer programming models and the fix-and-optimize improvement heuristic. This approach is evaluated on sets of benchmark instances and compared to methods from the literature. Computational experiments indicate that the proposed method is competitive with other approaches from the literature. © 2013 IEEE.
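The fix-and-optimize idea (hold most binary setup decisions fixed and exactly re-optimize a small window) can be sketched on a toy single-item uncapacitated instance; the cost model and data below are our simplification, while the paper applies the heuristic to the multi-level capacitated problem via MIP models:

```python
def plan_cost(setups, demand, setup_cost=50, hold_cost=1):
    """Cost of a setup pattern: at each setup period, produce exactly
    the demand up to (but not including) the next setup period."""
    if demand and not setups[0]:
        return float("inf")          # period-1 demand cannot be met
    total, inventory = 0, 0
    for t, d in enumerate(demand):
        if setups[t]:
            total += setup_cost
            nxt = next((j for j in range(t + 1, len(demand)) if setups[j]),
                       len(demand))
            inventory += sum(demand[t:nxt])
        inventory -= d
        total += hold_cost * inventory
    return total

def fix_and_optimize(setups, demand, window=3):
    """Slide a window over the horizon; inside it, enumerate all setup
    patterns exactly while the rest of the plan stays fixed."""
    best = list(setups)
    improved = True
    while improved:
        improved = False
        for start in range(len(demand) - window + 1):
            for bits in range(2 ** window):
                cand = list(best)
                for j in range(window):
                    cand[start + j] = bool((bits >> j) & 1)
                if plan_cost(cand, demand) < plan_cost(best, demand):
                    best, improved = cand, True
    return best

demand = [20, 30, 0, 40, 10, 50]
start = [True] * len(demand)
plan = fix_and_optimize(start, demand)
print(plan_cost(plan, demand) < plan_cost(start, demand))  # True
```

In the paper's setting the window subproblem is handed to a MIP solver instead of being enumerated, and the genetic algorithm supplies diverse starting plans for the improvement step.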

Relevance: 100.00%

Publisher:

Abstract:

Mashups are becoming increasingly popular as end users are able to easily access, manipulate, and compose data from several web sources. To support end users, communities are forming around mashup development environments that facilitate sharing code and knowledge. We have observed, however, that end user mashups tend to suffer from several deficiencies, such as inoperable components or references to invalid data sources, and that those deficiencies are often propagated through the rampant reuse in these end user communities. In this work, we identify and specify ten code smells indicative of deficiencies we observed in a sample of 8,051 pipe-like web mashups developed by thousands of end users in the popular Yahoo! Pipes environment. We show through an empirical study that end users generally prefer pipes that lack those smells, and then present eleven specialized refactorings that we designed to target and remove the smells. Our refactorings reduce the complexity of pipes, increase their abstraction, update broken sources of data and dated components, and standardize pipes to fit the community development patterns. Our assessment on the sample of mashups shows that smells are present in 81% of the pipes, and that the proposed refactorings can reduce that number to 16%, illustrating the potential of refactoring to support thousands of end users developing pipe-like mashups.
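At its core, smell detection of this kind is a set of predicates over a pipe's module graph. A toy sketch with two invented checks follows; the smell names, module schema, and data are ours, not the paper's catalogue of ten smells:

```python
def find_smells(pipe):
    """pipe: list of module dicts with a 'type' and an optional
    'source' URL. Returns (smell, module type) pairs for two
    illustrative smells: malformed sources and duplicated modules."""
    smells, seen = [], set()
    for module in pipe:
        src = module.get("source", "")
        if src and not src.startswith("http"):
            smells.append(("invalid-source", module["type"]))
        key = (module["type"], src)
        if key in seen:                      # copy-pasted module
            smells.append(("duplicate-module", module["type"]))
        seen.add(key)
    return smells

pipe = [
    {"type": "fetch", "source": "http://example.org/feed"},
    {"type": "fetch", "source": "http://example.org/feed"},
    {"type": "fetch", "source": "feed://broken"},
    {"type": "sort"},
]
print(find_smells(pipe))
# [('duplicate-module', 'fetch'), ('invalid-source', 'fetch')]
```

A refactoring in this style is then the corresponding rewrite: delete the duplicate, or replace the malformed source with a working one.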

Relevance: 100.00%

Publisher:

Abstract:

Methods from statistical physics, such as those involving complex networks, have been increasingly used in the quantitative analysis of linguistic phenomena. In this paper, we represented pieces of text with different levels of simplification in co-occurrence networks and found that topological regularity correlated negatively with textual complexity. Furthermore, in less complex texts the distance between concepts, represented as nodes, tended to decrease. The complex networks metrics were treated with multivariate pattern recognition techniques, which allowed us to distinguish between original texts and their simplified versions. For each original text, two simplified versions were generated manually with increasing number of simplification operations. As expected, distinction was easier for the strongly simplified versions, where the most relevant metrics were node strength, shortest paths and diversity. Also, the discrimination of complex texts was improved with higher hierarchical network metrics, thus pointing to the usefulness of considering wider contexts around the concepts. Though the accuracy rate in the distinction was not as high as in methods using deep linguistic knowledge, the complex network approach is still useful for a rapid screening of texts whenever assessing complexity is essential to guarantee accessibility to readers with limited reading ability. Copyright (c) EPLA, 2012
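The basic object is easy to reproduce: a weighted co-occurrence network over words, from which metrics such as node strength follow directly. Sentence-level co-occurrence and the toy text below are our simplifications of the networks used in the study:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_network(sentences):
    """Weighted network: nodes are words, and an edge's weight counts
    the sentences in which the two words co-occur."""
    weight = defaultdict(int)
    for sentence in sentences:
        words = sorted(set(sentence.lower().split()))
        for a, b in combinations(words, 2):
            weight[(a, b)] += 1
    return weight

def node_strength(network):
    """Strength of a node = sum of the weights of its edges."""
    strength = defaultdict(int)
    for (a, b), w in network.items():
        strength[a] += w
        strength[b] += w
    return dict(strength)

net = cooccurrence_network(["the cat sat", "the dog ran", "the cat ran"])
s = node_strength(net)
print(s["the"], s["dog"])  # 6 2
```

Metrics like these, computed per text and fed to a pattern classifier, are what allow original and simplified versions to be told apart.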

Relevance: 100.00%

Publisher:

Abstract:

Background: In normal aging, a decrease in the syntactic complexity of written production is usually associated with cognitive deficits. This study aimed to analyze the quality of older adults' textual production, as indicated by verbal fluency (number of words) and grammatical complexity (number of ideas), in relation to gender, age, schooling, and cognitive status. Methods: From a probabilistic sample of community-dwelling people aged 65 years and above (n = 900), 577 were selected on the basis of their responses to the Mini-Mental State Examination (MMSE) sentence-writing item, which were submitted to content analysis; 323 were excluded because they left the item blank or produced illegible or meaningless responses. Education-adjusted cut-off scores for the MMSE were used to classify the participants as cognitively impaired or unimpaired. Total and subdomain MMSE scores were computed. Results: 40.56% of the participants whose answers to the MMSE sentence were excluded from the analyses had cognitive impairment, compared to 13.86% among those whose answers were included. The excluded participants were older and less educated. Women and those older than 80 years had the lowest MMSE scores. There was no statistically significant relationship between gender, age, or schooling and textual performance. There was a modest but significant correlation between the number of words written and the scores in the Language subdomain. Conclusions: The results suggest a strong influence of schooling and age on MMSE sentence performance. Failing to write a sentence may suggest cognitive impairment; yet the instructions for the MMSE sentence, i.e. to produce a simple sentence, may limit its clinical interpretation.

Relevance: 100.00%

Publisher:

Abstract:

The thesis consists of three independent parts. Part I: Polynomial amoebas. We study the amoeba of a polynomial, as defined by Gelfand, Kapranov and Zelevinsky. A central role in the treatment is played by a certain convex function which is linear in each complement component of the amoeba, which we call the Ronkin function. This function is used in two different ways. First, we use it to construct a polyhedral complex, which we call a spine, approximating the amoeba. Second, the Monge-Ampère measure of the Ronkin function has interesting properties which we explore. This measure can be used to derive an upper bound on the area of an amoeba in two dimensions. We also obtain results on the number of complement components of an amoeba, and consider possible extensions of the theory to varieties of codimension higher than 1. Part II: Differential equations in the complex plane. We consider polynomials in one complex variable arising as eigenfunctions of certain differential operators, and obtain results on the distribution of their zeros. We show that in the limit when the degree of the polynomial approaches infinity, its zeros are distributed according to a certain probability measure. This measure has its support on the union of finitely many curve segments, and can be characterized by a simple condition on its Cauchy transform. Part III: Radon transforms and tomography. This part is concerned with different weighted Radon transforms in two dimensions, in particular the problem of inverting such transforms. We obtain stability results for this inverse problem for rather general classes of weights, including weights of attenuation type with data acquisition limited to a 180-degree range of angles. We also derive an inversion formula for the exponential Radon transform, with the same restriction on the angle.
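For reference, the central objects of Part I can be stated compactly; these are the standard definitions going back to Gelfand, Kapranov and Zelevinsky and to Ronkin:

```latex
% Amoeba of a Laurent polynomial f on (C^*)^n:
\mathcal{A}_f = \operatorname{Log}\bigl(f^{-1}(0)\bigr),
\qquad
\operatorname{Log}(z_1,\dots,z_n) = (\log|z_1|,\dots,\log|z_n|).

% The Ronkin function, convex on R^n and linear on each
% complement component of the amoeba:
N_f(x) = \frac{1}{(2\pi i)^n}
  \int_{\operatorname{Log}^{-1}(x)} \log\lvert f(z)\rvert\,
  \frac{dz_1}{z_1}\wedge\cdots\wedge\frac{dz_n}{z_n}.
```

The spine mentioned above is the corner locus of the piecewise-linear function obtained from these linear pieces, and the Monge-Ampère measure of N_f is what yields the two-dimensional area bound.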

Relevance: 100.00%

Publisher:

Abstract:

The application of Concurrency Theory to Systems Biology is in its earliest stage of progress. The metaphor of cells as computing systems by Regev and Shapiro opened the employment of concurrent languages for the modelling of biological systems. Their peculiar characteristics led to the design of many bio-inspired formalisms which achieve higher faithfulness and specificity. In this thesis we present pi@, an extremely simple and conservative extension of the pi-calculus representing a keystone in this respect, thanks to its expressiveness capabilities. The pi@ calculus is obtained by the addition of polyadic synchronisation and priority to the pi-calculus, in order to achieve compartment semantics and atomicity of complex operations respectively. In its direct application to biological modelling, the stochastic variant of the calculus, Spi@, is shown able to model consistently several phenomena such as formation of molecular complexes, hierarchical subdivision of the system into compartments, inter-compartment reactions, and dynamic reorganisation of compartment structure consistent with volume variation. The pivotal role of pi@ is evidenced by its capability of encoding in a compositional way several bio-inspired formalisms, so that it represents the optimal core of a framework for the analysis and implementation of bio-inspired languages. In this respect, the encodings of BioAmbients, Brane Calculi and a variant of P Systems in pi@ are formalised. The conciseness of their translation into pi@ allows their indirect comparison by means of their encodings. Furthermore, it provides a ready-to-run implementation of minimal effort whose correctness is granted by the correctness of the respective encoding functions. Further important results of general validity are stated on the expressive power of priority. Several impossibility results are described, which clearly state the superior expressiveness of prioritised languages and the problems arising in the attempt to provide their parallel implementation. To this aim, a new setting in distributed computing (the last man standing problem) is singled out and exploited to prove the impossibility of providing a purely parallel implementation of priority by means of point-to-point or broadcast communication.