856 results for Analysis of Algorithms and Problem Complexity
Abstract:
In this paper we present algorithms that operate on pairs of 0,1-matrices whose product is again a matrix with zero and one entries. When applied to a pair, the algorithms change the number of non-zero entries in the matrices while their product remains unchanged. We establish the conditions under which the number of 1s decreases. We also recursively define pairs of matrices whose product is a specified matrix and for which applying these algorithms minimizes the total number of non-zero entries in both matrices. These matrices may be interpreted as solutions to a well-known information retrieval problem, in which case the number of 1 entries represents the complexity of the retrieval and information update operations.
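To make the invariant concrete, here is a minimal sketch (not the paper's algorithm) showing that two factorizations of the same 0,1-matrix under Boolean matrix multiplication can differ in their total number of 1 entries while the product stays fixed:

```python
# Boolean matrix product: C[i][j] = OR over k of (A[i][k] AND B[k][j]).
def bool_matmul(A, B):
    m, p = len(B), len(B[0])
    return [[int(any(A[i][k] and B[k][j] for k in range(m)))
             for j in range(p)] for i in range(len(A))]

def count_ones(*mats):
    return sum(sum(sum(row) for row in M) for M in mats)

C = [[1, 1], [1, 1]]

# Factorization 1: identity times C -- 6 ones in total.
A1, B1 = [[1, 0], [0, 1]], [[1, 1], [1, 1]]
# Factorization 2: a rank-1 Boolean factorization -- only 4 ones in total.
A2, B2 = [[1], [1]], [[1, 1]]

assert bool_matmul(A1, B1) == C and bool_matmul(A2, B2) == C
print(count_ones(A1, B1), count_ones(A2, B2))  # 6 4
```

The example only illustrates why the total count of 1s is not determined by the product, which is the degree of freedom such algorithms exploit.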
Abstract:
At the problem-statement level, the paper considers the introduction of the concept of a "development space", the kinds of possible changes of a system, and the structure and mechanisms of development. Typologies of development indicators, the role of the information component, and the notion of quality are also considered.
Abstract:
A case study of an aircraft engine manufacturer is used to analyze the effects of management levers on the lead time and design errors generated in an iteration-intensive concurrent engineering process. The levers considered are the amount of design-space exploration iteration, the degree of process concurrency, and the timing of design reviews. Simulation is used to show how the ideal combination of these levers can vary with changes in design problem complexity, which can increase, for instance, when novel technology is incorporated in a design. Results confirm that it is important to consider multiple iteration-influencing factors and their interdependencies to understand concurrent processes, because the factors can interact with confounding effects. The article also demonstrates a new approach to deriving a system dynamics model from a process task network, which could be applied to analyze other concurrent engineering scenarios.
Abstract:
Many combinatorial problems coming from the real world lack a clear, well-defined structure: they are typically complicated by side constraints, or composed of two or more sub-problems that are usually not disjoint. Such problems are not well suited to pure approaches based on a single programming paradigm, because a paradigm that effectively handles one problem characteristic may behave inefficiently when facing others. In these cases, modelling the problem using different programming techniques, trying to "take the best" from each technique, can produce solvers that largely dominate pure approaches. We demonstrate the effectiveness of hybridization and discuss different hybridization techniques by analyzing two classes of problems with particular structures, exploiting Constraint Programming and Integer Linear Programming solving tools, with Algorithm Portfolios and Logic-Based Benders Decomposition as integration and hybridization frameworks.
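As a rough illustration of the Algorithm Portfolio idea mentioned above, here is a minimal sketch, assuming interchangeable solver callables that each return a solution or None; the solvers here are placeholders, not the CP and ILP tools the work actually integrates:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed, TimeoutError

def run_portfolio(solvers, instance, timeout=60.0):
    """Run all solvers concurrently; return the first non-None solution."""
    pool = ThreadPoolExecutor(max_workers=len(solvers))
    futures = [pool.submit(solve, instance) for solve in solvers]
    try:
        for fut in as_completed(futures, timeout=timeout):
            result = fut.result()
            if result is not None:
                return result              # first solver to succeed wins
    except TimeoutError:
        pass                               # no solver finished in the budget
    finally:
        pool.shutdown(wait=False, cancel_futures=True)
    return None
```

The design choice is the simplest one: run every paradigm on every instance and let the fastest win, which already dominates each pure approach whenever the paradigms' strengths are complementary.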
Abstract:
The compelling quality of the Global Change simulation study (Altemeyer, 2003), in which high RWA (right-wing authoritarianism)/high SDO (social dominance orientation) individuals produced poor outcomes for the planet, rests on the inference that the link between high RWA/SDO scores and disaster in the simulation can be generalized to real environmental and social situations. However, we argue that studies of the Person × Situation interaction are biased to overestimate the role of individual variability. When variables are operationalized, strongly normative items are excluded because they are skewed and kurtotic. This occurs both in the measurement of predictor constructs, such as RWA, and in outcome constructs, such as prejudice and war. Analyses using normal linear statistics highlight personality variables such as RWA, which produce variance, and overlook the role of norms, which produce invariance. Where both normative and personality forces are operating, as in intergroup contexts, the linear analysis generates statistics for the sample that disproportionately reflect the behavior of the deviant, antinormative minority and direct attention away from the baseline, normative position. The implications of these findings for the link between high RWA and disaster are discussed.
Abstract:
The theory of nonlinear dynamic systems provides new methods to handle complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory.

In the modern industrialized countries, several hundred thousand people die every year due to sudden cardiac death. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials. It contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management.

Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents general characteristics for each of these classes of HRV signals in bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an Analysis of Variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets.

In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onsets would help patients and observers take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by higher order spectra (HOS) has been reported to be a promising approach for differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers achieved 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns, which are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy measures, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
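As a rough sketch of the pipeline described above, the following example estimates a bispectrum by the direct FFT method, reduces it to a few scalar features, and trains an SVM; the synthetic signals, segment length and feature summaries are illustrative stand-ins, not the thesis's actual HRV data or feature set:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def bispectrum(x, nfft=128):
    """Direct estimate: average over segments of X(f1) X(f2) conj(X(f1+f2))."""
    segs = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segs:
        X = np.fft.fft(s - np.mean(s))
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
    return B / max(len(segs), 1)

def hos_features(x):
    B = np.abs(bispectrum(x))
    return [B.mean(), B.max(), B.sum(), (B ** 2).sum()]  # simple summaries

# Two synthetic classes: noise vs. noise plus a periodic component.
rng = np.random.default_rng(0)
signals = [rng.standard_normal(512) + c * np.sin(0.3 * np.arange(512))
           for c in (0, 1) for _ in range(20)]
X = np.array([hos_features(s) for s in signals])
y = np.repeat([0, 1], 20)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print(clf.score(X, y))
```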
Abstract:
This work is conducted to answer the practically important question of whether the down conductors of lightning protection systems on tall towers and buildings can be electrically isolated from the structure itself. As a first step, it is presumed that a down conductor placed on a metallic tower is a pessimistic representation of the actual problem, on the grounds that the proximity of the heavy metallic structure will have a large damping effect. The post-stroke current distributions along the down conductors and towers, which can be quite different from that in the lightning channel, govern the post-stroke near field and the resulting gradient in the soil. Also, for a reliable estimation of the actual stroke current from measured down conductor currents, it is essential to know the current distribution characteristics along the down conductors. In view of these considerations, the present work attempts to deduce the post-stroke current and voltage distribution along typical down conductors and towers. A solution of the governing field equations on an electromagnetic model of the system is sought for the investigation. Simulations of the spatio-temporal distribution of the post-stroke current and voltage have produced very interesting results. It is concluded that it is almost impossible to achieve electrical isolation between the structure and the down conductor, and that there will be significant induction into the steel matrix of the supporting structure.
Abstract:
In this paper, we outline a systematic procedure for the scaling analysis of momentum and heat transfer in laser melted pools. With suitable choices of non-dimensionalising parameters, the governing equations coupled with appropriate boundary conditions are first scaled, and the relative significance of the various terms appearing in them is accordingly analysed. The analysis is then utilised to predict the orders of magnitude of some important quantities, such as the velocity scale at the top surface, the velocity boundary layer thickness, the maximum temperature rise in the pool, the fully developed pool depth, and the time required for the initiation of melting. Using the scaling predictions, the influence of various processing parameters on the system variables can be recognised, which enables a deeper insight into the physical problem of interest. Moreover, some of the quantities predicted by the scaling analysis can be utilised for the optimised selection of an appropriate grid size and time step for full numerical simulation of the process. The scaling predictions are finally assessed by comparison with experimental and numerical results quoted in the literature, and an excellent qualitative agreement is observed.
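To illustrate the kind of order-of-magnitude reasoning involved, here is a standard balance for surface-tension-driven (Marangoni) melt pools, given only as a representative example and not reproduced from the paper: the Marangoni shear stress at the free surface is equated to the viscous stress across a boundary layer of thickness delta,

```latex
\[
  \mu \frac{U_s}{\delta}
  \sim \left|\frac{\partial \gamma}{\partial T}\right| \frac{\Delta T}{L},
  \qquad
  \delta \sim L\,\mathrm{Re}^{-1/2}
         = L \left(\frac{\mu}{\rho\, U_s\, L}\right)^{1/2},
\]
% Eliminating delta gives the order of magnitude of the surface velocity:
\[
  U_s \sim
  \left(
    \frac{\left|\partial \gamma / \partial T\right|^{2} \Delta T^{2}}
         {\rho\, \mu\, L}
  \right)^{1/3}.
\]
```

Here gamma is the surface tension, Delta T the characteristic temperature difference across a pool of size L, and mu and rho the melt viscosity and density.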
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of throughput performance.
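For context, the core of Bianchi's model referred to above is a two-equation fixed point; a minimal sketch of the standard saturated case follows (the paper's extension, which ties transmission slots to physical slots and scan duration, is not reproduced here):

```python
def bianchi_fixed_point(n, W=32, m=5, iters=2000):
    """Solve tau = f(p), p = 1 - (1 - tau)^(n - 1) by damped iteration.

    n: contending stations; W: minimum contention window;
    m: maximum backoff stage (CWmax = 2^m * W).
    """
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)       # conditional collision prob.
        new_tau = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * new_tau        # damping aids convergence
    return tau, p

n = 10
tau, p = bianchi_fixed_point(n)
p_tr = 1.0 - (1.0 - tau) ** n                  # some station transmits
p_s = n * tau * (1.0 - tau) ** (n - 1) / p_tr  # ...and it is a success
print(f"tau={tau:.4f}  p={p:.4f}  P(success | busy slot)={p_s:.4f}")
```

These slot probabilities are the building blocks of the throughput expressions, with the secondary network's scan duration entering through its transmission probability.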
Abstract:
The trapezoidal rule, which is a special case of the Newmark family of algorithms, is one of the most widely used methods for transient hyperbolic problems. In this work, we show that this rule conserves linear and angular momenta and energy in the case of undamped linear elastodynamics problems, and an "energy-like measure" in the case of undamped acoustic problems. These conservation properties thus provide a rational basis for using this algorithm. In linear elastodynamics problems, variants of the trapezoidal rule that incorporate "high-frequency" dissipation are often used, since the higher frequencies, which are not approximated properly by the standard displacement-based approach, often result in unphysical behavior. Instead of modifying the trapezoidal algorithm, we propose using a hybrid finite element framework for constructing the stiffness matrix. Hybrid finite elements, which are based on a two-field variational formulation involving displacement and stresses, are known to approximate the eigenvalues much more accurately than the standard displacement-based approach, thereby either bypassing or reducing the need for high-frequency dissipation. We show this by means of several examples, where we compare the numerical solutions obtained using the displacement-based and hybrid approaches against analytical solutions.
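The conservation property for undamped linear elastodynamics is easy to check numerically; a small sketch with an arbitrary two-DOF system (matrices chosen here for illustration only):

```python
# Average-acceleration Newmark (beta=1/4, gamma=1/2), i.e. the trapezoidal
# rule, applied to M a + K u = 0. The discrete energy
# E = (1/2) v^T M v + (1/2) u^T K u is conserved to round-off.
import numpy as np

M = np.diag([2.0, 1.0])
K = np.array([[6.0, -2.0], [-2.0, 4.0]])
u, v = np.array([1.0, 0.0]), np.zeros(2)
a = np.linalg.solve(M, -K @ u)

dt, beta, gamma = 0.1, 0.25, 0.5
A_eff = M + beta * dt**2 * K                   # constant effective matrix

def energy(u, v):
    return 0.5 * v @ M @ v + 0.5 * u @ K @ u

E0 = energy(u, v)
for _ in range(1000):
    u_pred = u + dt * v + dt**2 * (0.5 - beta) * a
    v_pred = v + dt * (1.0 - gamma) * a
    a = np.linalg.solve(A_eff, -K @ u_pred)    # equation of motion at n+1
    u = u_pred + beta * dt**2 * a
    v = v_pred + gamma * dt * a

print(abs(energy(u, v) - E0))                  # ~1e-13: conserved
```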
Abstract:
Two stages have been observed in micro-indentation experiments on a soft film on a hard substrate. In the first stage, the hardness of the thin film decreases with increasing indentation depth while the indentation is shallow; in the second stage, the hardness of the film increases with increasing depth as the indenter tip approaches the hard substrate. In this paper, a new strain gradient theory is used to analyze the micro-indentation behavior of a soft film on a hard substrate, and classical plasticity theory is applied to the same problem for comparison. Comparing the two theoretical results with the experimental data shows that the strain gradient theory describes the data quite well at both shallow and deep indentation depths, while the classical theory cannot explain the experimental results.
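For reference, the canonical strain-gradient scaling for the first stage is the Nix-Gao indentation size effect (the paper's specific theory may differ in detail):

```latex
% H_0: hardness in the large-depth limit; h^*: a material length scale set
% by the density of geometrically necessary dislocations under the indenter.
\[
  \frac{H(h)}{H_0} = \sqrt{1 + \frac{h^{*}}{h}},
\]
```

so the measured hardness decreases toward H_0 as the depth h increases, until substrate effects (the second stage) reverse the trend. Classical plasticity, which lacks an intrinsic material length scale, predicts a depth-independent hardness for a homogeneous solid and therefore cannot capture the first-stage softening.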
Abstract:
Several algorithms for optical flow are studied theoretically and experimentally. Differential and matching methods are examined; these two methods have differing domains of application: differential methods are best when displacements in the image are small (< 2 pixels), while matching methods work well for moderate displacements but do not handle sub-pixel motions. Both types of optical flow algorithm can use either local or global constraints, such as spatial smoothness. Local matching and differential techniques and global differential techniques are examined. Most algorithms for optical flow rely on weak assumptions about the local variation of the flow and the variation of image brightness. Strengthening these assumptions improves the flow computation, at the computational cost of requiring larger spatial and temporal support. Using larger support is valid when constraints on the local shape of the flow are satisfied. We show that a simple constraint on the local shape of the optical flow, namely slow spatial variation in the image plane, is often satisfied. We show how local differential methods imply the constraints for related methods using higher derivatives. Experiments show the behavior of these optical flow methods on velocity fields which do not obey the assumptions. Implementation of these methods highlights the importance of numerical differentiation. Numerical approximation of derivatives requires care in two respects: first, the temporal and spatial derivatives must be matched, because of the significant scale differences in space and time; and second, the derivative estimates improve with larger support.
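A compact sketch of a local (patchwise) differential method of the kind examined above, in the spirit of Lucas-Kanade; the window size and synthetic test are illustrative, not from the study:

```python
import numpy as np

def local_flow(I1, I2, y, x, w=2):
    """Least-squares solve of I_x u + I_y v + I_t = 0 over a (2w+1)^2 window.

    Valid for small displacements (< ~2 pixels). Spatial derivatives are
    central differences and the temporal derivative a frame difference;
    the caveats above about matching derivative estimates apply.
    """
    Ix = (np.roll(I1, -1, axis=1) - np.roll(I1, 1, axis=1)) / 2.0
    Iy = (np.roll(I1, -1, axis=0) - np.roll(I1, 1, axis=0)) / 2.0
    It = I2 - I1
    win = (slice(y - w, y + w + 1), slice(x - w, x + w + 1))
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)
    b = -It[win].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth pattern shifted by 0.5 pixel in x.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
I1 = np.sin(0.3 * xx) + np.cos(0.2 * yy)
I2 = np.sin(0.3 * (xx - 0.5)) + np.cos(0.2 * yy)
print(local_flow(I1, I2, 32, 32))  # approximately (0.5, 0.0)
```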
Abstract:
Helicobacter pylori is a bacterial pathogen that affects more than half of the world's population with gastro-intestinal diseases and is associated with gastric cancer. The cell surface of H. pylori is decorated with lipopolysaccharides (LPSs) composed of three distinct regions: a variable polysaccharide moiety (O-chain), a structurally conserved core oligosaccharide, and a lipid A region that anchors the LPS to the cell membrane. The O-chain of H. pylori LPS exhibits unique oligosaccharide structures, such as Lewis (Le) antigens, that are similar to those present in the gastric mucosa and are involved in interactions with the host. Glucan, heptoglycan, and riban domains are present in the outer core region of some H. pylori LPSs. Amylose-like glycans and mannans are also constituents of some H. pylori strains, possibly co-expressed with LPSs. The complexity of H. pylori LPSs has hampered the establishment of accurate structure-function relationships in interactions with the host, and the design of carbohydrate-based therapeutics, such as vaccines. Carbohydrate microarrays are recent, powerful and sensitive tools for studying carbohydrate antigens and, since their emergence, have provided insights into the function of carbohydrates and their involvement in pathogen-host interactions. The major goals of this thesis were the structural analysis of LPSs from H. pylori strains isolated from gastric biopsies of symptomatic Portuguese patients and the construction of a novel pathogen carbohydrate microarray of these LPSs (H. pylori LPS microarray) for interaction studies with proteins. LPSs were extracted from the cell surface of five H. pylori clinical isolates and one NCTC strain (26695) by the phenol/water method, fractionated by size exclusion chromatography and analysed by gas chromatography coupled to mass spectrometry. The oligosaccharides released after mild acid treatment of the LPS were analysed by electrospray mass spectrometry. In addition to the conserved core oligosaccharide moieties, structural analyses revealed the presence of type-2 Lex and Ley antigens and N-acetyllactosamine (LacNAc) sequences, typically found in H. pylori strains. Also, the presence of O-6 linked glucose residues, particularly in LPSs from strains 2191 and NCTC 26695, pointed to the expression of a 6-glucan. Other structural domains, namely ribans, composed of O-2 linked ribofuranose residues, were observed in the LPSs of most of the H. pylori clinical isolates. In the LPS from strain 14382, large amounts of O-3 linked galactose units were detected, pointing to the occurrence of a galactan, a domain recently identified in the LPS of another H. pylori strain. A particular feature of the LPSs from strains 2191 and CI-117 was the detection of large amounts of O-4 linked N-acetylglucosamine (GlcNAc) residues, suggesting the presence of chitin-like glycans, which to our knowledge have not been described for H. pylori strains. For the construction of the H. pylori LPS microarray, the structurally analysed LPSs, as well as LPS-derived oligosaccharide fractions prepared as neoglycolipid (NGL) probes, were noncovalently immobilized onto nitrocellulose-coated glass slides. These were printed together with NGLs of selected sequence-defined oligosaccharides, bacterial LPSs and polysaccharides. The H. pylori LPS microarray was probed for recognition with carbohydrate-binding proteins (CBPs) of known specificity.
These included Le and blood group-related monoclonal antibodies (mAbs), plant lectins, a carbohydrate-binding module (CBM) and the mammalian immune receptors DC-SIGN and Dectin-1. The analysis of these CBPs provided new information that complemented the structural analyses and was valuable in the quality control of the constructed microarray. Microarray analysis revealed the occurrence of type-2 Lex and Ley, but not type-1 Lea or Leb antigens, supporting the results obtained in the structural analysis. Furthermore, the H. pylori LPSs were recognised by DC-SIGN, a mammalian lectin known to interact with this bacterium through fucosylated Le epitopes expressed in its LPSs. The α-fucose-specific lectin UEA-I showed restricted binding to probes containing the type-2 blood group H sequence and to the LPSs from strains CI-117 and 14382. The presence of H-type-2, as well as H-type-1, in the LPSs from these strains was confirmed using specific mAbs. Although the H-type-1 determinant has been reported for H. pylori LPSs, this is the first report of the presence of the H-type-2 determinant. Microarray analysis also revealed that plant lectins known to bind 4-linked GlcNAc chitin oligosaccharide sequences bound H. pylori LPSs. STL, which exhibited restricted and strong binding to 4-linked GlcNAc tri- and pentasaccharides, differentially recognised the LPS from strain CI-117. The chitin sequences recognised in the LPS could be internal, as no binding was detected to this LPS with WGA, which is known to be specific for the non-reducing terminal of 4-linked GlcNAc sequences. Analyses of the H. pylori LPSs by SDS-PAGE and Western blot with STL provided further evidence for the presence of these novel domains in the O-chain region of this LPS. The H. pylori LPS microarray was also applied to the analysis of two human sera: the first from a case infected with H. pylori (H. pylori+ CI-5) and the second from a non-infected control. The analysis revealed a higher IgG reactivity towards H. pylori LPSs in the H. pylori+ serum than in the control serum. A specific IgG response was observed to the LPS isolated from the CI-5 strain, which caused the infection. The present thesis has contributed to extending current knowledge of the chemical structures of LPSs from H. pylori clinical isolates. Furthermore, the H. pylori LPS microarray that was constructed enabled the study of interactions with host proteins and showed promise as a tool in serological studies of H. pylori-infected individuals. Thus, it is anticipated that the use of these complementary approaches may contribute to a better understanding of the molecular complexity of the LPSs and their role in pathogenesis.