993 results for Software Complexity
Abstract:
As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring high levels of reliability, faulty system modules may increase development and maintenance costs. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. As a consequence, research effort to predict software modules likely to contain faults has been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous studies show that statistical models can provide reasonable estimates when predicting faulty modules from software metrics. However, as context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, the ability to take full advantage of existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
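The following minimal sketch illustrates the general idea of reusing a fault prediction model across projects. It is not the dissertation's actual adaptation method; the data, feature names, and the per-project standardization step are illustrative assumptions.

    # Sketch of cross-project fault prediction (illustrative only).
    # Assumes two projects described by the same static code metrics,
    # e.g. lines of code, cyclomatic complexity, coupling.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Hypothetical data: rows = modules, columns = metrics; y = faulty (1) or not (0).
    X_source, y_source = rng.normal(5, 2, (200, 3)), rng.integers(0, 2, 200)
    X_target = rng.normal(8, 4, (50, 3))  # different project, different metric scales

    # Standardizing each project's metrics separately is one simple way to
    # compensate for project-specific metric distributions before reusing a model.
    X_src_std = StandardScaler().fit_transform(X_source)
    X_tgt_std = StandardScaler().fit_transform(X_target)

    model = LogisticRegression().fit(X_src_std, y_source)
    fault_prob = model.predict_proba(X_tgt_std)[:, 1]  # fault-proneness per module
    print(fault_prob[:5])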
Abstract:
Security defects are common in large software systems because of their size and complexity. Even when efficient development processes, testing, and maintenance policies are applied, a large number of vulnerabilities can remain in a system. Some vulnerabilities persist from one release to the next because they cannot be easily reproduced through testing, and these endanger the security of the systems. We propose vulnerability classification and prediction frameworks based on vulnerability reproducibility. The frameworks are effective in identifying the types and locations of vulnerabilities at an early stage and in improving the security of the software in subsequent versions (referred to as releases). We expand an existing concept of software bug classification to vulnerability classification (easily reproducible and hard to reproduce), developing a classification framework that differentiates between these vulnerabilities based on code fixes and textual reports. We then investigate the potential correlations between the vulnerability categories and classical software metrics, as well as other runtime environmental factors of reproducibility, to develop a vulnerability prediction framework. The classification and prediction frameworks help developers adopt corresponding mitigation or elimination actions and develop appropriate test cases. The vulnerability prediction framework also helps security experts focus their effort on the top-ranked vulnerability-prone files. As a result, the frameworks decrease the number of attacks that exploit security vulnerabilities in the next versions of the software. To build the classification and prediction frameworks, different machine learning techniques (C4.5 Decision Tree, Random Forest, Logistic Regression, and Naive Bayes) are employed. The effectiveness of the proposed frameworks is assessed on collected software security defects of Mozilla Firefox.
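The sketch below shows the kind of classifier the frameworks employ (Random Forest shown; the other listed learners are used analogously). The feature names and data are hypothetical stand-ins for the software metrics and runtime environmental factors described above, not the study's actual dataset.

    # Illustrative reproducibility classifier sketch.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    # Rows = vulnerable files; columns = e.g. code churn, complexity, #dependencies.
    X = rng.random((300, 4))
    y = rng.integers(0, 2, 300)  # 0 = hard to reproduce, 1 = easily reproducible

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # rough accuracy estimate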
Abstract:
Magnetic Resonance Imaging (MRI) is the in vivo technique most commonly employed to characterize changes in brain structures. Conventional MRI-derived morphological indices capture only partial aspects of brain structural complexity. Fractal geometry and its most popular index, the fractal dimension (FD), can characterize self-similar structures, including grey matter (GM) and white matter (WM). Previous literature shows the need to define a so-called fractal scaling window, within which each structure manifests self-similarity. This justifies the existence of fractal properties and confirms Mandelbrot’s assertion that "fractals are not a panacea; they are not everywhere". In this work, we propose a new approach to automatically determine the fractal scaling window, computing two new fractal descriptors, i.e., the minimal and maximal fractal scales (mfs and Mfs). Our method was implemented in a software package, validated on phantoms, and applied to large datasets of structural MR images. We demonstrated that the FD is a useful marker of the morphological complexity changes that occur during brain development and aging, and, using ultra-high-field (7T) examinations, we showed that the cerebral GM also has fractal properties below the spatial scale of 1 mm. We applied our methodology to two neurological diseases. We observed a reduction of brain structural complexity in SCA2 patients and, using a machine learning approach, showed that the cerebral WM FD is a consistent feature in predicting cognitive decline in patients with small vessel disease and mild cognitive impairment. Finally, we showed that the FD of WM skeletons derived from diffusion MRI provides information complementary to that obtained from the FD of the general WM structure in T1-weighted images. In conclusion, the fractal descriptors of structural brain complexity are candidate biomarkers for detecting subtle morphological changes during development, aging, and neurological disease.
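As background, the FD is typically estimated by box counting. The following 2-D toy sketch shows the principle, including how restricting the log-log fit to the scales where the relation is linear implements the scaling-window idea; the paper's actual 3-D pipeline and its mfs/Mfs descriptors are more elaborate.

    # Minimal 2-D box-counting sketch (illustrative, not the paper's pipeline).
    import numpy as np

    def box_count(mask, sizes):
        counts = []
        for s in sizes:
            # Count s-by-s boxes containing at least one foreground pixel.
            h = (mask.shape[0] // s) * s
            w = (mask.shape[1] // s) * s
            blocks = mask[:h, :w].reshape(h // s, s, w // s, s)
            counts.append(blocks.any(axis=(1, 3)).sum())
        return np.array(counts)

    mask = np.random.default_rng(2).random((256, 256)) > 0.7  # toy binary "structure"
    sizes = np.array([2, 4, 8, 16, 32])  # only scales inside the scaling window

    # FD is minus the slope of log(count) vs log(size).
    fd = -np.polyfit(np.log(sizes), np.log(box_count(mask, sizes)), 1)[0]
    print(f"estimated FD: {fd:.2f}")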
Abstract:
Disconnectivity between Default Mode Network (DMN) nodes can cause clinical symptoms and cognitive deficits in Alzheimer's disease (AD). We aimed to examine the structural connectivity between DMN nodes, to verify the extent to which white matter disconnection affects cognitive performance. MRI data of 76 subjects (25 mild AD, 21 amnestic Mild Cognitive Impairment subjects, and 30 controls) were acquired on a 3.0T scanner. ExploreDTI software (fractional anisotropy threshold = 0.25, angular threshold = 60°) calculated axial, radial, and mean diffusivities, fractional anisotropy, and streamline count. AD patients showed lower fractional anisotropy (P=0.01) and streamline count (P=0.029), and higher radial diffusivity (P=0.014), than controls in the cingulum. After correction for white matter atrophy, only fractional anisotropy and radial diffusivity remained significantly different between AD patients and controls (P=0.003 and P=0.05). In the parahippocampal bundle, AD patients had lower mean and radial diffusivities (P=0.048 and P=0.013) than controls, of which only radial diffusivity survived adjustment for white matter atrophy (P=0.05). Regression models revealed that cognitive performance is also accounted for by white matter microstructural values. Structural connectivity within the DMN is important to the execution of high-complexity tasks, probably because of its relevant role in the integration of the network.
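For reference, the diffusivity metrics compared above are standard scalar functions of the diffusion tensor eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$; these are the textbook DTI definitions, not formulas specific to this study:

\[
\mathrm{AD} = \lambda_1, \qquad
\mathrm{RD} = \frac{\lambda_2 + \lambda_3}{2}, \qquad
\mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3},
\]
\[
\mathrm{FA} = \sqrt{\tfrac{3}{2}}\,
\sqrt{\frac{(\lambda_1-\mathrm{MD})^2 + (\lambda_2-\mathrm{MD})^2 + (\lambda_3-\mathrm{MD})^2}
{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}}.
\]

Because RD averages the two minor eigenvalues while FA measures their spread relative to the mean, the two metrics can move in opposite directions, as in the cingulum findings above.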
Abstract:
This article aimed to compare the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study, and the incisor, canine, premolar, first molar, and second molar regions were assessed. Cone beam computed tomography (CBCT) images were obtained with an i-CAT Next Generation scanner. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages, XoranCat®, OnDemand3D®, and KDIS3D®, all able to read DICOM images. In addition, 25% of the sample was re-evaluated to assess reproducibility. The mandibles were then sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the largest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
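A hedged sketch of this statistical design is shown below: each software package's measurements compared against the physical gold standard with one-way ANOVA and Dunnett's post-hoc test (the latter requires SciPy >= 1.11). The numbers are made up for illustration, not the study's data.

    # ANOVA + Dunnett sketch with fabricated example data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    gold = rng.normal(20.0, 2.0, 40)             # physical measurements (mm)
    xorancat = gold + rng.normal(0.25, 0.3, 40)  # hypothetical software offsets
    ondemand = gold + rng.normal(-0.11, 0.3, 40)
    kdis3d = gold + rng.normal(-0.14, 0.3, 40)

    print(stats.f_oneway(gold, xorancat, ondemand, kdis3d))
    # Dunnett: compare every software package against the gold-standard control.
    print(stats.dunnett(xorancat, ondemand, kdis3d, control=gold))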
Abstract:
This study describes the statistical analysis techniques used, and the statistical accessibility, in a sample of the original articles published between 1996 and 2006 in two research journals in the field of fruit science: the Revista Brasileira de Fruticultura (RBF) and the French journal Fruits. In total, 986 articles were classified into 16 categories of statistical analysis, ordered by ascending degree of complexity. Over the period analyzed, an increase in the use of more sophisticated analyses over time was observed in both journals. The papers published in RBF applied more complex statistical techniques more frequently, with greater use of randomized block designs, factorial arrangements, split-plot designs, and hierarchical models, as well as Tukey's test for multiple comparisons of means. In the papers published in Fruits, the use of other parametric tests and of Duncan's test predominated. The SAS statistical package was the one most used in the articles published in both journals. Compared with readers of the French journal, readers of RBF needed a higher level of statistical knowledge to access most of the articles published in the period.
Abstract:
During the early Holocene, two main Paleoamerican cultures thrived in Brazil: the Tradição Nordeste in the semi-desertic Sertão and the Tradição Itaparica in the high plains of the Planalto Central. Here we report on the paleodietary signals of a Paleoamerican found in a third Brazilian ecological setting, a riverine shellmound, or sambaqui, located in the Atlantic forest. Most sambaquis are found along the coast, and the peoples associated with them subsisted on marine resources. We report a different situation at the oldest recorded riverine sambaqui, called Capelinha. Capelinha is a relatively small sambaqui established along a river 60 km from the Atlantic coast. It contained the well-preserved remains of a Paleoamerican known as Luzio, dated to 9,945 ± 235 years ago: the oldest sambaqui dweller so far. Luzio's bones were remarkably well preserved and allowed for stable isotopic analysis of diet. Although artifacts found at this riverine site show connections with the Atlantic coast, we show that Luzio represents a population that was dependent on inland rather than marine coastal resources. After comparing Luzio's paleodietary data with that of other extant and prehistoric groups, we discuss where his group could have come from, whether a terrestrial diet persisted in riverine sambaquis, and how Luzio fits within the discussion of the replacement of Paleoamerican by Amerindian morphology. This study adds to the evidence of a greater complexity in the prehistory of the colonization of, and adaptation to, the New World.
Abstract:
This paper presents SMarty, a variability management approach for UML-based software product lines (PL). SMarty is supported by a UML profile, the SMartyProfile, and a process for managing variabilities, the SMartyProcess. SMartyProfile represents variabilities, variation points, and variants in UML models by applying a set of stereotypes. SMartyProcess consists of a set of activities that are systematically executed to trace, identify, and control variabilities in a PL based on SMarty. It also identifies variability implementation mechanisms and analyzes specific product configurations. In addition, a more comprehensive application of SMarty is presented using SEI's Arcade Game Maker PL, and an evaluation of SMarty is discussed together with related work.
Abstract:
Background: The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology today. Many techniques and models have been proposed for this task. However, it is generally not possible to recover the original topology with great accuracy, mainly because of the short time series available in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of entropy-based (mutual information) GRN inference methods, a new criterion function is proposed here. Results: In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, with conditional entropy applied as the criterion function. To assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expression data generated by an artificial gene network (AGN) model, as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer functions are obtained by random draws from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, in turn, vary in network size, and their topologies are based on real networks; their dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions, and network sizes. Conclusions: A remarkable improvement in accuracy was observed in the experimental results, with the non-Shannon entropy reducing the number of false connections in the inferred topology. The best value of the Tsallis free parameter was on average in the range 2.5 <= q <= 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for the investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/.
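The generalized entropy at the core of this criterion function has a simple closed form. A minimal sketch follows; the example distribution is arbitrary, and this shows only the entropy itself, not the full feature-selection pipeline.

    # Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); recovers Shannon as q -> 1.
    import numpy as np

    def tsallis_entropy(p, q):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if np.isclose(q, 1.0):
            return -np.sum(p * np.log(p))  # Shannon limit
        return (1.0 - np.sum(p ** q)) / (q - 1.0)

    p = [0.5, 0.25, 0.125, 0.125]
    for q in (1.0, 2.5, 3.5):  # 2.5 <= q <= 3.5 is the best range reported above
        print(q, tsallis_entropy(p, q))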
Abstract:
Thousands of Free and Open Source Software Projects (FSP) have been, and continue to be, created on the Internet. This scenario increases the number of opportunities to collaborate to the same extent that it promotes competition for users and contributors, who can take projects to superior levels, unachievable by founders alone. Thus, given that the main goal of FSP founders is to improve their projects by means of collaboration, it becomes important to understand and manage a project's capacity to attract users and contributors. To support researchers and founders in this challenge, this paper introduces the concept of attractiveness and develops a theoretical-managerial toolkit covering the causes, indicators, and consequences of attractiveness, enabling its strategic management.
Abstract:
The large amount of information in electronic contracts makes their establishment a highly complex task. An approach inspired by Software Product Lines (PL) and based on feature modelling was previously proposed to make this process more systematic through information reuse and structuring. By assessing this feature-based approach against a proposed set of requirements, it was shown that the approach does not allow the prices of services and of Quality of Service (QoS) attributes to be considered in the negotiation and included in the electronic contract. Thus, this paper presents an extension of that approach in which prices and price types associated with Web services and QoS levels are applied. An extended toolkit prototype is also presented, along with an example experiment using the proposed approach.
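A toy sketch of the extension's core idea follows: attaching prices and price types to service features and QoS levels in a feature model used for contract negotiation. All names and numbers are hypothetical; this is not the paper's toolkit.

    # Feature-model-with-prices sketch (hypothetical names and values).
    from dataclasses import dataclass, field

    @dataclass
    class Feature:
        name: str
        price: float = 0.0
        price_type: str = "fixed"  # e.g. "fixed" or "per-use"
        children: list = field(default_factory=list)

        def total_price(self):
            # Contract cost: this feature plus all selected sub-features.
            return self.price + sum(c.total_price() for c in self.children)

    delivery = Feature("DeliveryService", price=10.0, children=[
        Feature("QoS:ResponseTime<2s", price=5.0, price_type="per-use"),
        Feature("QoS:Availability>99%", price=8.0),
    ])
    print(delivery.total_price())  # 23.0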
Abstract:
We evaluated the reliability and validity of a Brazilian-Portuguese version of the Epilepsy Medication Treatment Complexity Index (EMTCI). Interrater reliability was evaluated with the intraclass correlation coefficient (ICC), and validity was evaluated by correlating mean EMTCI scores with the following variables: number of antiepileptic drugs (AEDs), seizure control, patients' perception of seizure control, and adherence to the therapeutic regimen as measured with the Morisky scale. We studied patients with epilepsy followed in a tertiary university-based hospital outpatient clinic, aged 18 years or older, independent in activities of daily living, and without cognitive impairment or active psychiatric disease. ICCs ranged from 0.721 to 0.999. Mean EMTCI scores were significantly correlated with the variables assessed. Higher EMTCI scores were associated with an increasing number of AEDs, uncontrolled seizures, patients' perception of lack of seizure control, and poorer adherence to the therapeutic regimen. The results indicate that the Brazilian-Portuguese EMTCI is reliable and valid for clinical application in the country. The Brazilian-Portuguese EMTCI version may be a useful tool in developing strategies to minimize treatment complexity, possibly improving seizure control and quality of life in people with epilepsy in our milieu.
Abstract:
Aging is known to have a degrading influence on many structures and functions of the human sensorimotor system. The present work assessed aging-related changes in postural sway using fractal and complexity measures of center of pressure (COP) dynamics, with the hypothesis that complexity and fractality decrease in older individuals. Older subjects (68 ± 4 years) and young adult subjects (28 ± 7 years) performed a quiet stance task (60 s) and a prolonged standing task (30 min) in which subjects were allowed to move freely. Long-range correlations (fractality) of the data were estimated by detrended fluctuation analysis (DFA); changes in entropy were estimated by the multi-scale entropy (MSE) measure. The DFA results showed that the fractal dimension was lower for the older subjects than for the young adults, but the fractal dimensions of both groups were not different from 1/f noise for time intervals between 10 and 600 s. The MSE analysis performed with the typically applied adjustment to the criterion distance showed a higher degree of complexity in the older subjects, which is inconsistent with the hypothesis that complexity in the human physiological system decreases with aging. The same MSE analysis performed without the adjustment showed no differences between the groups. Taking all results together, the decrease in total postural sway and in long-range correlations in older individuals are signs of an adaptation process reflecting a diminishing ability to generate adequate responses on a longer time scale.
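A compact DFA sketch for a 1-D signal such as COP displacement is shown below; it illustrates the method in general, not the study's exact pipeline, and the input signal is synthetic.

    # Detrended fluctuation analysis (DFA) sketch.
    import numpy as np

    def dfa_alpha(x, scales):
        y = np.cumsum(x - np.mean(x))  # integrated (profile) series
        flucts = []
        for s in scales:
            n = len(y) // s
            segs = y[:n * s].reshape(n, s)
            t = np.arange(s)
            # Detrend each window with a linear fit, collect RMS fluctuation.
            rms = [np.sqrt(np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2))
                   for seg in segs]
            flucts.append(np.mean(rms))
        # Scaling exponent alpha = slope of log F(s) vs log s; alpha ~ 1 => 1/f noise.
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    x = np.random.default_rng(4).normal(size=4096)  # white noise, expect alpha ~ 0.5
    print(dfa_alpha(x, scales=[16, 32, 64, 128, 256]))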
Abstract:
The aim of this study was to investigate the effects of knowledge of results (KR) frequency and task complexity on motor skill acquisition. The task consisted of throwing a bocha ball so as to place it as close as possible to the target ball. 120 students aged 11 to 73 years were assigned to one of eight experimental groups according to knowledge of results frequency (25, 50, 75, and 100%) and task complexity (simple and complex). Subjects performed 90 trials in the acquisition phase and 10 trials in the transfer test. The results showed that knowledge of results given at a frequency of 25% produced a lower absolute error than the 50% frequency, and a lower variable error than the 50, 75, and 100% frequencies, but no effect of task complexity was found.