912 results for Redundant Residue Number Systems
Abstract:
A detailed theoretical study of the 1,7,11,17-tetraoxa-2,6,12,16-tetraaza-cycloeicosane ligand ([20]AneN(4)O(4)) coordinated to Fe2+, Co2+, Ni2+, Ru2+, Rh2+, and Pd2+ transition metal ions was carried out with the B3LYP method. Two different cases were considered: nitrogen as the donor atom (1a(q)) and oxygen as the donor atom (1b(q)). In all cases studied, the 1a(q) structures were more stable than the 1b(q) ones. Within each row, the energy increases with increasing atomic number. The M2+ cation binding energies for the 1a(q) complexes increase in the following order: Fe2+ < Ru2+ < Co2+ < Ni2+ < Rh2+ < Pd2+.
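For reference, the binding energies compared above are conventionally obtained as the energy difference between the complex and its separated fragments (a sketch of the standard expression; any basis-set or zero-point corrections applied in the study are not shown here):

\[
\Delta E_{\mathrm{bind}} = E\big([\mathrm{M(L)}]^{2+}\big) - E\big(\mathrm{M}^{2+}\big) - E(\mathrm{L}), \qquad \mathrm{L} = [20]\mathrm{AneN_4O_4}
\]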
Abstract:
In this paper, we perform a thorough analysis of a spectral phase-encoded time spreading optical code division multiple access (SPECTS-OCDMA) system based on Walsh-Hadamard (W-H) codes, aiming not only at finding optimal code-set selections but also at assessing the system's loss of security due to crosstalk. We prove that an inadequate choice of codes can make the crosstalk between active users large enough for the data of the user of interest to be detected by another user. The proposed algorithm for code optimization targets code sets that produce the minimum bit error rate (BER) among all codes for a specific number of simultaneous users. This methodology allows us to find optimal code sets for any OCDMA system, regardless of the code family used and the number of active users. This procedure is crucial for circumventing the unexpected lack of security due to crosstalk. We also show that a SPECTS-OCDMA system based on W-H 32 (64) fundamentally limits the number of simultaneous users to 4 (8) with no security violation due to crosstalk. More importantly, we prove that only a small fraction of the available code sets is actually immune to crosstalk with acceptable BER (< 10^-9), i.e., approximately 0.5% for W-H 32 with four simultaneous users, and about 1 x 10^-4 % for W-H 64 with eight simultaneous users.
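As an illustration of the kind of screening involved, the sketch below enumerates Walsh-Hadamard code sets (Sylvester construction) and ranks them by peak pairwise cross-correlation, a simple stand-in for crosstalk; the paper's actual algorithm ranks candidate sets by simulated BER, and the row range sampled here is arbitrary:

```python
import numpy as np
from itertools import combinations

def walsh_hadamard(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def max_crosstalk(codes):
    """Peak normalized cross-correlation between any two codes in the set."""
    n = codes.shape[1]
    return max(np.abs(np.correlate(codes[a], codes[b], mode="full")).max() / n
               for a, b in combinations(range(len(codes)), 2))

H = walsh_hadamard(32)
# Screen all 4-code sets drawn from rows 1-8 (row 0 is the all-ones code).
best = min(combinations(range(1, 9), 4),
           key=lambda rows: max_crosstalk(H[list(rows)]))
print("lowest-crosstalk sampled set:", best)
```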
Abstract:
The measure called accessibility has been proposed as a means to quantify the efficiency of communication between nodes in complex networks. This article reports results regarding the properties of accessibility, including its relationship with the average minimal time to visit all nodes reachable within h steps along a random walk starting from a source, as well as the number of nodes that are visited after a finite period of time. We characterize the relationship between accessibility and the average number of walks required to visit all reachable nodes (the exploration time), conjecture that maximum accessibility implies minimal exploration time, and confirm the relationship between the accessibility values and the number of nodes visited after a basic time unit. The latter relationship is investigated with respect to three types of dynamics: traditional random walks, self-avoiding random walks, and preferential random walks.
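Accessibility is commonly defined as the exponential of the Shannon entropy of the h-step transition probabilities of the walk; a minimal sketch under that assumption (the definition used in the article may differ in normalization):

```python
import numpy as np
import networkx as nx

def accessibility(G, source, h):
    """exp(entropy) of the h-step transition probabilities of a
    traditional random walk started at `source`."""
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    P = A / A.sum(axis=1, keepdims=True)      # row-stochastic transition matrix
    p = np.linalg.matrix_power(P, h)[nodes.index(source)]
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

G = nx.connected_watts_strogatz_graph(100, 4, 0.1, seed=1)
print(accessibility(G, source=0, h=3))
```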
Abstract:
The existing characterization of stability regions was developed under the assumption that the limit sets on the stability boundary are exclusively composed of hyperbolic equilibrium points and closed orbits. In this technical note, a characterization of the stability boundary of general autonomous nonlinear dynamical systems is developed under the weaker assumption that the limit sets on the stability boundary are composed of a countable number of disjoint, indecomposable components, which can be equilibrium points, closed orbits, quasi-periodic solutions, and even chaotic invariant sets. The characterizations derived in this technical note thus generalize the existing results in the theory of stability regions.
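For orientation, the objects being characterized are the standard ones (notation assumed here, not taken from the note): for an asymptotically stable equilibrium point x_s of \(\dot{x} = f(x)\) with flow \(\varphi(t, x)\), the stability region and the stability boundary are

\[
A(x_s) = \{\, x : \lim_{t \to \infty} \varphi(t, x) = x_s \,\}, \qquad \partial A(x_s) = \overline{A(x_s)} \setminus \operatorname{int} A(x_s).
\]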
Abstract:
The ability to entrap drugs within vehicles and subsequently release them has led to new treatments for a number of diseases. Based on an associative phase separation and interfacial diffusion approach, we developed a way to prepare DNA gel particles without adding any kind of cross-linker or organic solvent. Among the various agents studied, cationic surfactants offered particularly efficient control for encapsulation and DNA release from these DNA gel particles. The driving force for this strong association is the electrostatic interaction between the two components, as induced by the entropic increase due to the release of the respective counter-ions. However, little is known about the influence of the respective counter-ions on this surfactant-DNA interaction. Here we examined the effect of different counter-ions on the formation and properties of the DNA gel particles by mixing DNA (either single-(ssDNA) or double-stranded (dsDNA)) with the single chain surfactant dodecyltrimethylammonium (DTA). In particular, we used as counter-ions of this surfactant the hydrogen sulfate and trifluoromethane sulfonate anions and the two halides, chloride and bromide. Effects on the morphology of the particles obtained, the encapsulation of DNA and its release, as well as the haemocompatibility of these particles are presented, using counter-ion structure and DNA conformation as controlling parameters. Analysis of the data indicates that the degree of counter-ion dissociation from the surfactant micelles and the polar/hydrophobic character of the counter-ion are important parameters in the final properties of the particles. The stronger interaction with amphiphiles for ssDNA than for dsDNA suggests the important role of hydrophobic interactions in DNA.
Abstract:
Abstract Background A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. The most common way to decide whether a given probability is sufficient is Bayesian binary classification, in which the probability under the model characterizing the sequence family of interest is compared to that under an alternative probability model, typically a null model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM, and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution, and the target sequence distribution. This paper presents a study to evaluate the impact of the choice of null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results In all tests, the target null model presented the lowest number of false positives when using random sequences as a test. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, using only one probabilistic model (HMM) and a specific benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
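A minimal sketch of the null-model comparison described above: a sequence is scored by its log-odds against a position-independent null, and the choice of null (uniform vs. the target sequence's own composition) shifts the score. The family-model scorer is a placeholder here, standing in for, e.g., an HMM forward score:

```python
import math
from collections import Counter

def uniform_null(alphabet="ACGT"):
    return {c: 1.0 / len(alphabet) for c in alphabet}

def target_null(seq):
    """Null model estimated from the target sequence's own composition."""
    counts = Counter(seq)
    return {c: counts[c] / len(seq) for c in counts}

def log_odds(seq, family_logp, null):
    """Family log-probability minus log-probability under the null."""
    return family_logp(seq) - sum(math.log(null[c]) for c in seq)

seq = "GCGCGCGGCCGC"                          # extreme GC bias
fake_family_logp = lambda s: -1.2 * len(s)    # placeholder family model
print(log_odds(seq, fake_family_logp, uniform_null()))
print(log_odds(seq, fake_family_logp, target_null(seq)))  # target null absorbs the GC bias, lowering the score
```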
Abstract:
Network reconfiguration for service restoration (SR) in distribution systems is a complex optimization problem. For large-scale distribution systems, it is computationally hard to find adequate SR plans in real time, since the problem is combinatorial and non-linear, involving several constraints and objectives. Two Multi-Objective Evolutionary Algorithms that use Node-Depth Encoding (NDE) have proved able to efficiently generate adequate SR plans for large distribution systems: (i) the hybridization of the Non-Dominated Sorting Genetic Algorithm-II (NSGA-II) with NDE, named NSGA-N; and (ii) a Multi-Objective Evolutionary Algorithm based on subpopulation tables that uses NDE, named MEAN. Two further challenges are faced here: designing SR plans for larger systems that are as good as those for relatively smaller ones, and for multiple faults that are as good as those for a single fault. To tackle both challenges, this paper proposes a method combining NSGA-N, MEAN, and a new heuristic. The heuristic focuses the application of NDE operators on alarming network zones, according to technical constraints. The method generates SR plans of similar quality in distribution systems of significantly different sizes (from 3860 to 30,880 buses). Moreover, the number of switching operations required to implement the SR plans generated by the proposed method increases moderately with the number of faults.
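A minimal sketch of the Pareto-dominance test underlying multi-objective selection in algorithms such as NSGA-II (the objective vectors shown are illustrative placeholders; the actual SR objectives and constraints come from the problem formulation):

```python
def dominates(a, b):
    """True if plan `a` Pareto-dominates plan `b`
    (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Illustrative objective vectors: (switching operations, out-of-service load)
plans = {"A": (12, 0.0), "B": (15, 0.0), "C": (10, 3.5)}
front = [name for name, obj in plans.items()
         if not any(dominates(other, obj)
                    for n2, other in plans.items() if n2 != name)]
print("non-dominated plans:", front)   # ['A', 'C']; B is dominated by A
```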
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-chips (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve also greatly help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
Abstract:
Many different complex systems depend on a large number n of mutually independent random Boolean variables. The most useful representation for these systems, usually called complex stochastic Boolean systems (CSBSs), is the intrinsic order graph. This is a directed graph on 2^n vertices, corresponding to the 2^n binary n-tuples (u_1, ..., u_n) ∈ {0,1}^n of 0s and 1s. In this paper, different duality properties of the intrinsic order graph are rigorously analyzed in detail. The results can be applied to many CSBSs arising from any scientific, technical or social area…
Abstract:
A complex stochastic Boolean system (CSBS) is a complex system depending on an arbitrarily large number n of stochastic Boolean variables…
Abstract:
A complex stochastic Boolean system (CSBS) is a system depending on an arbitrary number n of stochastic Boolean variables. The analysis of CSBSs is mainly based on the intrinsic order: a partial order relation defined on the set {0,1}^n of binary n-tuples. The usual graphical representation for a CSBS is the intrinsic order graph: the Hasse diagram of the intrinsic order. In this paper, some new properties of the intrinsic order graph are studied. In particular, the set and the number of its edges, the degree and neighbors of each vertex, as well as typical properties, such as the symmetry and fractal structure of this graph, are analyzed…
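A minimal sketch of the matrix condition commonly used to decide the intrinsic order between two binary n-tuples (stated here from the usual intrinsic order criterion, with the variables assumed pre-sorted as that criterion requires; this is an illustration, not code from the paper):

```python
from itertools import product

def intrinsically_geq(u, v):
    """u >= v in the intrinsic order iff every (0,1) column of the
    2 x n matrix [u; v] is matched by a distinct preceding (1,0) column."""
    spare = 0                      # unmatched (1,0) columns seen so far
    for ui, vi in zip(u, v):
        if ui == 1 and vi == 0:
            spare += 1
        elif ui == 0 and vi == 1:
            if spare == 0:
                return False
            spare -= 1
    return True

n = 3
tuples = list(product([0, 1], repeat=n))
pairs = [(u, v) for u in tuples for v in tuples
         if u != v and intrinsically_geq(u, v)]
# The intrinsic order graph (Hasse diagram) keeps only the covering pairs.
print(len(pairs), "comparable ordered pairs for n =", n)
```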
Abstract:
Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication, and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes, simple devices that typically embed a low-power computational unit (microcontrollers, FPGAs, etc.), a wireless communication unit, one or more sensors, and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption, and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. To handle the large amount of data generated by a WSN, several multisensor data fusion techniques have been developed. The aim of multisensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: Multimodal Surveillance and Activity Recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement, and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking. Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefits of such an architecture in terms of increased recognition performance and robustness to faults and noise. Furthermore, we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. This dataset is freely available to the scientific community for research purposes and can be used as a testbench for developing, testing, and comparing different activity recognition techniques.
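A minimal sketch of the kind of hierarchical fusion described above: a meta-classifier combining a varying number of node-level outputs by confidence-weighted voting. The scheme and the labels are illustrative, not the dissertation's exact architecture:

```python
from collections import defaultdict

def fuse(node_outputs):
    """Fuse a variable number of (label, confidence) pairs coming from
    per-node gesture classifiers; nodes that dropped out or were put to
    sleep simply do not report, so the fusion degrades gracefully."""
    scores = defaultdict(float)
    for label, confidence in node_outputs:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Three nodes report, a fourth is asleep to save power:
print(fuse([("circle", 0.9), ("circle", 0.6), ("shake", 0.7)]))  # -> circle
```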
Abstract:
In cases of severe osteoarthritis of the knee causing pain, deformity, and loss of stability and mobility, clinicians consider the substitution of the articular surfaces by means of joint prostheses. The objectives pursued by this surgery are: complete pain elimination, restoration of normal physiological mobility and joint stability, and correction of all deformities and, thus, of limping. Knee surgical navigation systems have been developed in computer-aided surgery in order to improve the final surgical outcome in total knee arthroplasty. These systems provide the surgeon with quantitative, real-time information about each surgical action, such as bone cut executions and prosthesis component alignment, by means of tracking tools rigidly fixed onto the femur and the tibia. Nevertheless, there is still a margin of error, due to incorrect surgical procedures and to the still limited kinematic information provided by current systems. In particular, patello-femoral joint kinematics is not considered in knee surgical navigation. It is also unclear, and thus a source of misunderstanding, which methodology is most appropriate for studying patellar motion. In addition, the knee ligamentous apparatus is only superficially considered in navigated total knee arthroplasty, without taking into account how its physiological behavior is altered by this surgery. The aim of the present research work was to provide new functional and biomechanical assessments for the improvement of surgical navigation systems for joint replacement in the human lower limb. This was mainly realized through the identification and development of new techniques that allow a thorough comprehension of the functioning of the knee joint, with particular attention to the patello-femoral joint and to the main knee soft tissues. A knee surgical navigation system with active markers was used in all the research activities presented in this work. Preliminary tests were performed in order to assess the system's accuracy and the robustness of a number of navigation procedures. Four studies were performed in-vivo on patients requiring total knee arthroplasty, randomly implanted by means of traditional or navigated procedures, in order to check the real efficacy of the latter with respect to the former. To assess patello-femoral joint kinematics in the intact and replaced knee, twenty in-vitro tests were performed using a prototype tracking tool for the patella as well. In addition to standard anatomical and articular recommendations, original proposals for defining the patellar anatomically-based reference frame and for studying patello-femoral joint kinematics were reported and used in these tests. These definitions were applied in two further in-vitro tests in which, for the first time, the implant of the patellar component was also fully navigated. In addition, an original technique to analyze the main knee soft tissues by means of anatomically-based fiber mappings was reported and used in the same tests. The preliminary instrumental tests revealed a system accuracy within the millimeter and good inter- and intra-observer repeatability in defining all anatomical reference frames. In the in-vivo studies, the general alignment of the femoral and tibial prosthesis components and of the lower limb mechanical axis, as measured on radiographs, was more satisfactory, i.e. within ±3°, in those patients in whom total knee arthroplasty was performed with navigated procedures. As for the in-vitro tests, consistent patello-femoral joint kinematic patterns were observed across specimens throughout the knee flexion arc. In general, the physiological patellar motion of the intact knee was not restored after the implant. This restoration was successfully achieved in the two further tests in which all component implants, including the patellar insert, were fully navigated, i.e. by means of intra-operative assessment of patellar component positioning together with general tibio-femoral and patello-femoral joint assessment. The tests assessing the behavior of the main knee ligaments revealed their complexity and the different functional roles played by the several sub-bundles composing each ligament. Here too, total knee arthroplasty altered the physiological behavior of these knee soft tissues. These results reveal in-vitro the relevance and feasibility of applying new techniques for accurate knee soft tissue monitoring, patellar tracking assessment, and navigated patellar resurfacing intra-operatively, in the context of the most modern operative techniques. The present research work contributes to the much-debated knowledge of normal and replaced knee kinematics by testing the reported new methodologies. The consistency of these results provides fundamental information for the comprehension and improvement of knee orthopedic treatments. In the future, the reported new techniques can safely be applied in-vivo and also adopted in other joint replacements.
Abstract:
Background. One of the phenomena observed in human aging is the progressive increase of a systemic inflammatory state, a condition referred to as "inflammaging", negatively correlated with longevity. A prominent mediator of inflammation is the transcription factor NF-kB, which acts as a key transcriptional regulator of many genes coding for pro-inflammatory cytokines. Many different signaling pathways, activated by very diverse stimuli, converge on NF-kB, resulting in a regulatory network characterized by high complexity. NF-kB signaling has been proposed to be responsible for inflammaging. The scope of this analysis is to provide a wider, systemic picture of this intricate signaling and interaction network: the NF-kB pathway interactome. Methods. The study was carried out following a workflow for gathering information from the literature as well as from several pathway and protein-interaction databases, and for integrating and analyzing the existing data and the reconstructed representations using the available computational tools. Substantial manual intervention was necessary to integrate data from multiple sources into mathematically analyzable networks. The reconstruction of the NF-kB interactome pursued with this approach provides a starting point for a general view of the architecture and for a deeper analysis and understanding of this complex regulatory system. Results. A "core" and a "wider" NF-kB pathway interactome, consisting of 140 and 3146 proteins respectively, were reconstructed and analyzed through a mathematical, graph-theoretical approach. Among other interesting features, the topological characterization of the interactomes shows that a relevant number of interacting proteins are in turn products of genes whose expression is controlled and regulated precisely by NF-kB transcription factors. These "feedback loops", not always well known, deserve deeper investigation, since they may have a role in tuning the response and output consequent to NF-kB pathway initiation, in regulating the intensity of the response, or in maintaining its homeostasis and balance so as to make the functioning of such a critical system more robust and reliable. This integrated view allows us to shed light on the functional structure and on some of the crucial nodes of the NF-kB transcription factor interactome. Conclusion. Framing the structure and dynamics of the NF-kB interactome within a wider, systemic picture would be a significant step toward a better understanding of how NF-kB globally regulates diverse gene programs and phenotypes. This study represents a step towards a more complete and integrated view of the NF-kB signaling system.
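A minimal sketch of the kind of feedback-loop screen described above, assuming the interactome is available as an interaction graph together with a list of NF-kB transcriptional targets (the toy graph below is a placeholder, not the reconstructed interactome; NFKBIA and TNFAIP3 are well-known NF-kB targets used for illustration):

```python
import networkx as nx

# Placeholder data: the real interactomes have 140 / 3146 proteins.
interactome = nx.Graph([("RELA", "NFKBIA"), ("RELA", "TNFAIP3"),
                        ("RELA", "CREBBP"), ("NFKBIA", "IKBKB")])
nfkb_subunits = {"RELA"}
nfkb_targets = {"NFKBIA", "TNFAIP3"}   # genes transcriptionally controlled by NF-kB

# Feedback loop: a protein that interacts with an NF-kB subunit
# and is itself the product of an NF-kB-regulated gene.
feedback = {p for s in nfkb_subunits
            for p in interactome.neighbors(s) if p in nfkb_targets}
print("candidate feedback-loop proteins:", feedback)  # NFKBIA, TNFAIP3
```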
Abstract:
This thesis deals with the influence of chain branching of different topologies on the static properties of polymers. These investigations are carried out by means of Monte Carlo and molecular dynamics simulations. First, some theoretical concepts and models are introduced that permit the description of polymer chains on mesoscopic length scales. Important observables suitable for the quantitative characterization of branching structures in polymers are introduced and explained. The optimization techniques employed in the implementation of the computer program are also discussed. In addition to linear polymer chains, different topologies are studied as a function of solvent quality: star polymers with a variable number of arms, the crossover from star polymers to linear polymers, chains with a variable number of side chains, regular dendrimers, and hyperbranched structures. First, a thorough analysis of the simulation model is carried out on very long linear single chains. The scaling properties of the linear chains are investigated over the entire solvent range, from good solvent to largely collapsed chains in poor solvent. An important result of this work is the confirmation of the corrections to the scaling behavior of the hydrodynamic radius R_h. This result was made possible by the large chain lengths chosen and the high quality of the data obtained in this work, in particular for the linear chains, and it contradicts many previous simulation studies and experimental works. These corrections to scaling were demonstrated not only for the linear chains but also for star polymers with different numbers of arms. For linear chains, the influence of polydispersity is investigated. It is shown that an unambiguous mapping of length scales between the simulation model and experiment is not possible, since the dimensionless quantity used for this purpose depends too weakly on the degree of polymerization of the chains. A comparison of simulation data with industrial low-density polyethylene (LDPE) shows that LDPE exists in the form of strongly branched chains. For regular dendrimers, a high degree of back-folding of the arms into the inner core region was demonstrated.
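The corrections to scaling referred to above are conventionally written as a leading power law with a confluent correction term (a standard ansatz with exponent ν, correction exponent Δ, and non-universal amplitudes a_h, b_h; the precise fit form used in the thesis is not reproduced here):

\[
R_h(N) = a_h N^{\nu} \left( 1 + b_h N^{-\Delta} + \dots \right)
\]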