947 results for Systems Architecture


Relevance:

30.00%

Publisher:

Abstract:

Poly(amidoamine) (PAMAM) dendrimers are nanoparticles that have proven successful in transporting drugs owing to their high solubility, low toxicity and ability to control drug release. Studies have explored the biological potential of dendrimers for purposes such as gene delivery, vaccine development, and antiviral, antibacterial and anticancer therapies. This literature review on PAMAM dendrimers discusses the architecture and general construction of dendrimers and the intrinsic properties of PAMAM. It also describes how PAMAM dendrimers interact with many drugs and examines the potential of these macromolecules as drug nanocarriers in transdermal, ocular, respiratory, oral and intravenous routes of administration. Dendrimers promise good future prospects for biomedicine.

Relevance:

30.00%

Publisher:

Abstract:

The emerging Cyber-Physical Systems (CPSs) are envisioned to integrate computation, communication and control with the physical world. Therefore, CPS requires close interactions between the cyber and physical worlds in both time and space. These interactions are usually governed by events, which occur in the physical world and should autonomously be reflected in the cyber-world, and actions, which are taken by the CPS as a result of event detection and certain decision mechanisms. Both event detection and action decision operations should be performed accurately and in a timely manner to guarantee temporal and spatial correctness. This calls for a flexible architecture and task representation framework to analyze CPS operations. In this paper, we explore the temporal and spatial properties of events, define a novel CPS architecture, and develop a layered spatiotemporal event model for CPS. An event is represented as a function of attribute-based, temporal, and spatial event conditions, and logical operators are used to combine different types of event conditions to capture composite events. To the best of our knowledge, this is the first event model that captures the heterogeneous characteristics of CPS for formal temporal and spatial analysis.
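
To make the layered event representation concrete, here is a minimal Python sketch of attribute-based, temporal, and spatial event conditions combined with logical operators into a composite event; the `Reading` record and the condition functions are illustrative assumptions, not the paper's formal model.

```python
from dataclasses import dataclass

# Hypothetical reading produced by a sensor in the physical world.
@dataclass
class Reading:
    attribute: dict      # e.g. {"temperature": 82.0}
    timestamp: float     # seconds since an arbitrary epoch
    location: tuple      # (x, y) coordinates in the monitored area

# Atomic event conditions: attribute-based, temporal, and spatial.
def attr_condition(key, threshold):
    return lambda r: r.attribute.get(key, float("-inf")) > threshold

def temporal_condition(t_start, t_end):
    return lambda r: t_start <= r.timestamp <= t_end

def spatial_condition(center, radius):
    return lambda r: ((r.location[0] - center[0]) ** 2 +
                      (r.location[1] - center[1]) ** 2) ** 0.5 <= radius

# Logical operators combine atomic conditions into composite events.
def AND(*conds):
    return lambda r: all(c(r) for c in conds)

def OR(*conds):
    return lambda r: any(c(r) for c in conds)

def NOT(cond):
    return lambda r: not cond(r)

# Composite event: high temperature inside a zone during a time window.
overheating_in_zone = AND(
    attr_condition("temperature", 80.0),
    temporal_condition(0.0, 3600.0),
    spatial_condition(center=(10.0, 10.0), radius=5.0),
)

reading = Reading({"temperature": 82.0}, timestamp=120.0, location=(12.0, 9.0))
print(overheating_in_zone(reading))  # True -> the CPS would trigger an action
```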

Relevance:

30.00%

Publisher:

Abstract:

It is well known that control systems are the core of electronic differential systems (EDSs) in electric vehicles (EVs) and hybrid electric vehicles (HEVs). However, conventional closed-loop control architectures do not fully provide the needed ability to reject noises/disturbances, especially regarding the input acceleration signal coming from the driver's commands, which renders the EDS ineffective in such cases. For this reason, this paper proposes a novel EDS control architecture that offers a new approach for the traction system and can be used with a great variety of controllers (e.g., classic, artificial intelligence (AI)-based, and modern/robust). In addition, a modified proportional-integral-derivative (PID) controller, an AI-based neuro-fuzzy controller, and a robust optimal H-infinity controller were designed and evaluated to demonstrate the versatility of the novel architecture. Kinematic and dynamic models of the vehicle are briefly introduced. Then, simulated and experimental results are presented and discussed. The Hybrid Electric Vehicle in Low Scale (HELVIS)-Sim simulation environment was employed for the preliminary analysis of the proposed EDS architecture. Later, the EDS itself was embedded in a dSpace 1103 high-performance interface board so that real-time control of the rear wheels of the HELVIS platform was successfully achieved.
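
As one example of the controller families such an architecture is meant to host, below is a minimal discrete PID sketch in Python; it is a textbook controller with made-up gains and a crude first-order plant stand-in, not the paper's modified PID or the HELVIS traction model.

```python
class DiscretePID:
    """Textbook discrete PID controller (illustrative only)."""

    def __init__(self, kp, ki, kd, dt, output_limits=(-1.0, 1.0)):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.low, self.high = output_limits

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.low, min(self.high, u))  # saturate actuator command


# Toy usage: track a reference wheel speed (rad/s) for one rear wheel.
pid = DiscretePID(kp=0.3, ki=0.2, kd=0.01, dt=0.01)
speed = 0.0
for _ in range(5):
    command = pid.update(setpoint=10.0, measurement=speed)
    speed += 2.0 * command          # crude first-order plant stand-in
    print(round(speed, 3))          # speed climbs toward the 10 rad/s setpoint
```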

Relevance:

30.00%

Publisher:

Abstract:

Building facilities have become important infrastructures for modern productive plants dedicated to services. In this context, the control systems of intelligent buildings have evolved and their reliability has evidently improved. However, the occurrence of faults is inevitable in systems conceived, constructed and operated by humans. Thus, a practical alternative approach is very useful to reduce the consequences of faults. Yet, only a few publications address intelligent building modeling processes that take into consideration the occurrence of faults and how to manage their consequences. In the light of the foregoing, a procedure is proposed for the modeling of intelligent building control systems, considering their functional specifications in normal operation and in the event of faults. The proposed procedure adopts the concepts of discrete event systems and holons, and explores Petri nets and their extensions so as to represent the structure and operation of control systems for intelligent buildings under normal and abnormal situations. (C) 2012 Elsevier B.V. All rights reserved.
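
To illustrate the kind of representation involved, here is a minimal Python sketch of a place/transition Petri net whose marking evolves from normal operation to a degraded mode after a fault; the places, transitions and subsystem names are illustrative assumptions, not the holonic models proposed in the paper.

```python
# Places hold tokens; a transition fires when all its input places are marked.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)                 # place -> token count
        self.transitions = {}                        # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1


# Toy model: an HVAC subsystem that runs normally, may fault, then degrades.
net = PetriNet({"idle": 1, "fault_detected": 0})
net.add_transition("start",   ["idle"],           ["running"])
net.add_transition("fault",   ["running"],        ["fault_detected"])
net.add_transition("degrade", ["fault_detected"], ["degraded_mode"])

for t in ("start", "fault", "degrade"):
    net.fire(t)
print(net.marking)  # {'idle': 0, 'fault_detected': 0, 'running': 0, 'degraded_mode': 1}
```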

Relevance:

30.00%

Publisher:

Abstract:

Background: Proteinaceous toxins are observed across all levels of inter-organismal and intra-genomic conflicts. These include recently discovered prokaryotic polymorphic toxin systems implicated in intra-specific conflicts. They are characterized by a remarkable diversity of C-terminal toxin domains generated by recombination with standalone toxin-coding cassettes. Prior analysis revealed a striking diversity of nuclease and deaminase domains among the toxin modules. We systematically investigated polymorphic toxin systems using comparative genomics, sequence and structure analysis. Results: Polymorphic toxin systems are distributed across all major bacterial lineages and are delivered by at least eight distinct secretory systems. In addition to type-II, these include type-V, VI, VII (ESX), and the poorly characterized "Photorhabdus virulence cassettes (PVC)", PrsW-dependent and MuF phage-capsid-like systems. We present evidence that trafficking of these toxins is often accompanied by autoproteolytic processing catalyzed by HINT, ZU5, PrsW, caspase-like and papain-like domains, and a novel metallopeptidase associated with the PVC system. We identified over 150 distinct toxin domains in these systems. These span an extraordinary catalytic spectrum, including 23 distinct clades of peptidases, numerous previously unrecognized versions of nucleases and deaminases, ADP-ribosyltransferases, ADP-ribosyl cyclases, RelA/SpoT-like nucleotidyltransferases, glycosyltransferases and other enzymes predicted to modify lipids and carbohydrates, and a pore-forming toxin domain. Several of these toxin domains are shared with host-directed effectors of pathogenic bacteria. Over 90 families of immunity proteins might neutralize anywhere from a single type to at least 27 distinct types of toxin domains. In some organisms, multiple tandem immunity genes or immunity protein domains are organized into polyimmunity loci or polyimmunity proteins. Gene-neighborhood analysis of polymorphic toxin systems predicts the presence of novel trafficking-related components, and also reveals the organizational logic that allows toxin diversification through recombination. Domain architecture and protein-length analysis revealed that these toxins might be deployed as secreted factors, through directed injection, or via inter-cellular contact facilitated by filamentous structures formed by RHS/YD, filamentous hemagglutinin and other repeats. Phyletic pattern and lifestyle analysis indicate that polymorphic toxins and polyimmunity loci participate in cooperative behavior and facultative 'cheating' in several ecosystems such as the human oral cavity and soil. Multiple domains from these systems have also been repeatedly transferred to eukaryotes and their viruses, such as the nucleo-cytoplasmic large DNA viruses. Conclusions: Along with a comprehensive inventory of toxins and immunity proteins, we present several testable predictions regarding the active sites and catalytic mechanisms of toxins, their processing and trafficking, and their role in intra-specific and inter-specific interactions between bacteria. These systems provide insights regarding the emergence of key systems at different points in eukaryotic evolution, such as ADP ribosylation, interaction of myosin VI with cargo proteins, mediation of apoptosis, hyphal heteroincompatibility, hedgehog signaling, arthropod toxins, cell-cell interaction molecules like teneurins, and different signaling messengers.

Relevance:

30.00%

Publisher:

Abstract:

Background: The study of myofiber reorganization in the remote zone after myocardial infarction has been performed in 2D. Microstructural reorganization in remodeled hearts, however, can only be fully appreciated by considering myofibers as continuous 3D entities. The aim of this study was therefore to develop a technique for quantitative 3D diffusion CMR tractography of the heart, and to apply this method to quantify fiber architecture in the remote zone of remodeled hearts. Methods: Diffusion Tensor CMR of normal human, sheep, and rat hearts, as well as infarcted sheep hearts, was performed ex vivo. Fiber tracts were generated with a fourth-order Runge-Kutta integration technique and classified statistically by the median, mean, maximum, or minimum helix angle (HA) along the tract. An index of tract coherence was derived from the relationship between these HA statistics. Histological validation was performed using phase-contrast microscopy. Results: In normal hearts, the subendocardial and subepicardial myofibers had a positive and negative HA, respectively, forming a symmetric distribution around the midmyocardium. However, in the remote zone of the infarcted hearts, a significant positive shift in HA was observed. The ratio between negative and positive HA variance was reduced from 0.96 +/- 0.16 in normal hearts to 0.22 +/- 0.08 in the remote zone of the remodeled hearts (p < 0.05). This was confirmed histologically by the reduction of HA in the subepicardium from -52.03 +/- 2.94 degrees in normal hearts to -37.48 +/- 4.05 degrees in the remote zone of the remodeled hearts (p < 0.05). Conclusions: A significant reorganization of the 3D fiber continuum is observed in the remote zone of remodeled hearts. The positive (rightward) shift in HA in the remote zone is greatest in the subepicardium, but involves all layers of the myocardium. Tractography-based quantification, performed here for the first time in remodeled hearts, may provide a framework for assessing regional changes in the left ventricle following infarction.
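
As a rough illustration of the tract-generation step, the sketch below integrates a streamline through a direction field with a fourth-order Runge-Kutta scheme in Python; the toy helical field and step parameters are assumptions, not the study's diffusion-tensor data or processing pipeline.

```python
import numpy as np

def rk4_step(position, field, h):
    """One fourth-order Runge-Kutta step along a direction field.

    `field(p)` returns the local unit fiber direction at point p
    (in practice the primary eigenvector of the diffusion tensor).
    """
    k1 = field(position)
    k2 = field(position + 0.5 * h * k1)
    k3 = field(position + 0.5 * h * k2)
    k4 = field(position + h * k3)
    return position + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def track_fiber(seed, field, h=0.1, n_steps=50):
    """Integrate a single tract from a seed point."""
    points = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        points.append(rk4_step(points[-1], field, h))
    return np.array(points)

# Toy direction field: fibers wrap around the z-axis (a crude helical pattern).
def toy_field(p):
    d = np.array([-p[1], p[0], 0.3])
    return d / np.linalg.norm(d)

tract = track_fiber(seed=(1.0, 0.0, 0.0), field=toy_field)
print(tract.shape)   # (51, 3)
```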

Relevance:

30.00%

Publisher:

Abstract:

Breakthrough advances in microprocessor technology and efficient power management have altered the course of processor development with the emergence of multi-core processor technology, bringing a higher level of processing power. Many-core technology has boosted the computing power provided by clusters of workstations or SMPs, delivering large computational power at an affordable cost using solely commodity components. Different implementations of message-passing libraries and system software (including operating systems) are installed in such cluster and multi-cluster computing systems. In order to guarantee correct execution of a message-passing parallel application in a computing environment other than the one for which it was originally developed, a review of the application code is needed. In this paper, a hybrid communication interfacing strategy is proposed to execute a parallel application on a group of computing nodes belonging to different clusters or multi-clusters (computing systems that may be running different operating systems and MPI implementations), interconnected with public or private IP addresses, and responding interchangeably to user execution requests. Experimental results demonstrate the feasibility and effectiveness of the proposed strategy through the execution of benchmarking parallel applications.
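
To give a flavor of the interfacing idea, the sketch below relays a single message through a gateway between a "cluster A" sender and a "cluster B" receiver using plain TCP sockets on localhost; the ports, the single-message protocol and the gateway role are assumptions for illustration only, not the proposed strategy's actual mechanism or any MPI implementation detail.

```python
import socket
import threading
import time

def receiver(port, out):
    """Stand-in for a node reachable only inside cluster B."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        out.append(conn.recv(4096))
    srv.close()

def gateway(listen_port, target_host, target_port):
    """Accepts one message from cluster A and forwards it into cluster B."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    conn, _ = srv.accept()
    with conn:
        payload = conn.recv(4096)
    with socket.create_connection((target_host, target_port)) as out:
        out.sendall(payload)
    srv.close()

received = []
threading.Thread(target=receiver, args=(9002, received)).start()
threading.Thread(target=gateway, args=(9001, "127.0.0.1", 9002)).start()
time.sleep(0.2)                      # let both servers start listening

with socket.create_connection(("127.0.0.1", 9001)) as sender:   # node in cluster A
    sender.sendall(b"message crossing cluster boundaries")

time.sleep(0.2)
print(received)   # [b'message crossing cluster boundaries']
```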

Relevance:

30.00%

Publisher:

Abstract:

A complete census of planetary systems around a volume-limited sample of solar-type stars (FGK dwarfs) in the Solar neighborhood (d ≤ 15 pc) with uniform sensitivity down to Earth-mass planets within their Habitable Zones out to several AUs would be a major milestone in extrasolar planet astrophysics. This fundamental goal can be achieved with a mission concept such as NEAT, the Nearby Earth Astrometric Telescope. NEAT is designed to carry out space-borne extremely-high-precision astrometric measurements at the 0.05 μas (1σ) accuracy level, sufficient to detect dynamical effects due to orbiting planets of mass even lower than Earth's around the nearest stars. Such a survey mission would provide the actual planetary masses and the full orbital geometry for all the components of the detected planetary systems down to the Earth-mass limit. The NEAT performance limits can be achieved by carrying out differential astrometry between the targets and a set of suitable reference stars in the field. The NEAT instrument design consists of an off-axis parabola single-mirror telescope (D = 1 m), a detector with a large field of view located 40 m away from the telescope and made of 8 small movable CCDs located around a fixed central CCD, and an interferometric calibration system monitoring dynamical Young's fringes originating from metrology fibers located at the primary mirror. The mission profile is driven by the fact that the two main modules of the payload, the telescope and the focal plane, must be located 40 m apart, leading to the choice of a formation-flying option as the reference mission, and of a deployable boom option as an alternative choice. The proposed mission architecture relies on the use of two satellites, of about 700 kg each, operating at L2 for 5 years, flying in formation and offering a capability of more than 20,000 reconfigurations. The two satellites will be launched in a stacked configuration using a Soyuz ST launch vehicle. The NEAT primary science program will encompass an astrometric survey of our 200 closest F-, G- and K-type stellar neighbors, with an average of 50 visits each distributed over the nominal mission duration. The main survey operation will use approximately 70% of the mission lifetime. The remaining 30% of NEAT observing time might be allocated, for example, to improve the characterization of the architecture of selected planetary systems around nearby targets of specific interest (low-mass stars, young stars, etc.) discovered by Gaia, ground-based high-precision radial-velocity surveys, and other programs. With its exquisite, surgical astrometric precision, NEAT holds the promise to provide the first thorough census of Earth-mass planets around stars in the immediate vicinity of our Sun.
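
For a sense of the measurement scale involved, the astrometric signature of a planet on its host star is approximately α ≈ (M_p/M_*)(a/d), with a in AU and d in pc giving α in arcseconds; the short sketch below evaluates it for an Earth analogue around a Sun-like star at 10 pc (the formula is standard, the target values are illustrative).

```python
# Astrometric signature alpha ~ (M_planet / M_star) * (a / d),
# with a in AU and d in pc, giving alpha in arcseconds.
M_EARTH_IN_MSUN = 3.0e-6          # Earth mass in solar masses (approximate)

def astrometric_signature_uas(m_planet_msun, m_star_msun, a_au, d_pc):
    alpha_arcsec = (m_planet_msun / m_star_msun) * (a_au / d_pc)
    return alpha_arcsec * 1e6      # convert arcseconds to micro-arcseconds

# An Earth analogue (1 M_Earth at 1 AU) around a Sun-like star at 10 pc:
print(round(astrometric_signature_uas(M_EARTH_IN_MSUN, 1.0, 1.0, 10.0), 3))  # ~0.3 uas
```

A signal of roughly 0.3 μas for such a target is what motivates a measurement accuracy at the 0.05 μas level.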

Relevance:

30.00%

Publisher:

Abstract:

The study aims to analyze IT architecture management practices, their degree of maturity, and the influence of institutional and strategic factors on the decisions involved, through a case study in a large telecom organization. The case study allowed us to identify the practices that led the company to its current stage of maturity, as well as practices that can lead the company to the next stage. Strategic influence was mentioned by most respondents, while institutional influence was present in decisions related to innovation and those dealing with a higher level of uncertainty.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, Intelligent Tutoring Systems have been a very successful way of improving the learning experience. Many issues must be addressed before this technology can be considered mature. One of the main problems within Intelligent Tutoring Systems is the process of content authoring: knowledge acquisition and manipulation are difficult tasks because they require specialised skills in computer programming and knowledge engineering. In this thesis we discuss a general framework for knowledge management in an Intelligent Tutoring System and propose a mechanism based on first-order data mining to partially automate the acquisition of the knowledge used by the ITS during the tutoring process. Such a mechanism can be applied in Constraint-Based Tutors and in the Pseudo-Cognitive Tutor. We design and implement part of the proposed architecture, mainly the module for knowledge acquisition from examples based on first-order data mining. We then show that the algorithm can be applied to at least two different domains: first-order algebra equations and some topics of the C programming language. Finally, we discuss the limitations of the current approach and possible improvements to the whole framework.
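
As a sketch of how constraint-based tutoring knowledge might look once acquired, the Python fragment below pairs relevance and satisfaction conditions for a toy linear-equation domain; the specific constraints and the solution-state fields are illustrative assumptions, not the thesis's acquired knowledge base.

```python
from dataclasses import dataclass
from typing import Callable

# In constraint-based modeling, a constraint pairs a relevance condition
# ("when does this constraint apply?") with a satisfaction condition
# ("is the student's solution correct in that respect?").
@dataclass
class Constraint:
    description: str
    relevant: Callable[[dict], bool]
    satisfied: Callable[[dict], bool]

# Illustrative domain: solving a*x + b = c for x (field names are made up).
constraints = [
    Constraint(
        "If a constant is moved across the equals sign, its sign must flip.",
        relevant=lambda s: s["b"] != 0,
        satisfied=lambda s: s["rhs_after_move"] == s["c"] - s["b"],
    ),
    Constraint(
        "Dividing both sides must use the coefficient of x as the divisor.",
        relevant=lambda s: s["a"] not in (0, 1),
        satisfied=lambda s: s["divisor"] == s["a"],
    ),
]

def diagnose(solution_state):
    """Return feedback for every relevant constraint the student violates."""
    return [c.description
            for c in constraints
            if c.relevant(solution_state) and not c.satisfied(solution_state)]

# Student solving 2x + 3 = 11, but forgetting to flip the sign of 3.
student_state = {"a": 2, "b": 3, "c": 11, "rhs_after_move": 14, "divisor": 2}
print(diagnose(student_state))
```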

Relevance:

30.00%

Publisher:

Abstract:

The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called SYSTEM-ON-CHIP (SOC) or MULTI-PROCESSOR SYSTEM-ON-CHIP (MPSOC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how to best provide on-chip communication resources is clearly felt. NETWORKS-ON-CHIP (NOCS) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet switching paradigms they involve are also of great help in minimizing wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:

• The design of the NoC architecture needs to strike the best tradeoff among performance, features and the tight area and power constraints of the on-chip domain.

• Simulation and verification infrastructure must be put in place to explore, validate and optimize NoC performance.

• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.

• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.

This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.
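
As a small illustration of the kind of mechanism a NoC provides, the sketch below implements dimension-ordered (XY) routing on a 2D mesh in Python; it is a generic textbook policy and makes no claim about the routing actually used in ×pipes.

```python
# Dimension-ordered (XY) routing on a 2D mesh: a packet first travels along X
# until it reaches the destination column, then along Y. This is one of the
# simplest deterministic NoC routing policies and is deadlock-free on a mesh.
def xy_route(src, dst):
    x, y = src
    dx, dy = dst
    hops = []
    while x != dx:
        x += 1 if dx > x else -1
        hops.append((x, y))
    while y != dy:
        y += 1 if dy > y else -1
        hops.append((x, y))
    return hops

# Route a packet from tile (0, 0) to tile (3, 2) on a 4x3 mesh.
print(xy_route((0, 0), (3, 2)))
# [(1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```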

Relevance:

30.00%

Publisher:

Abstract:

Background. One of the phenomena observed in human aging is the progressive increase of a systemic inflammatory state, a condition referred to as "inflammaging", negatively correlated with longevity. A prominent mediator of inflammation is the transcription factor NF-kB, which acts as a key transcriptional regulator of many genes coding for pro-inflammatory cytokines. Many different signaling pathways activated by very diverse stimuli converge on NF-kB, resulting in a regulatory network characterized by high complexity. NF-kB signaling has been proposed to be responsible for inflammaging. The scope of this analysis is to provide a wider, systemic picture of such an intricate signaling and interaction network: the NF-kB pathway interactome. Methods. The study was carried out following a workflow for gathering information from the literature as well as from several pathway and protein-interaction databases, and for integrating and analyzing the existing data and the corresponding reconstructed representations using the available computational tools. Substantial manual intervention was necessary to integrate data from multiple sources into mathematically analyzable networks. The reconstruction of the NF-kB interactome pursued with this approach provides a starting point for a general view of the architecture and for a deeper analysis and understanding of this complex regulatory system. Results. A "core" and a "wider" NF-kB pathway interactome, consisting of 140 and 3146 proteins respectively, were reconstructed and analyzed through a mathematical, graph-theoretical approach. Among other interesting features, the topological characterization of the interactomes shows that a relevant number of interacting proteins are in turn products of genes that are controlled and regulated in their expression exactly by NF-kB transcription factors. These "feedback loops", not always well known, deserve deeper investigation since they may have a role in tuning the response and the output consequent to NF-kB pathway initiation, in regulating the intensity of the response, or in maintaining its homeostasis and balance in order to make the functioning of such a critical system more robust and reliable. This integrated view sheds light on the functional structure and on some of the crucial nodes of the NF-kB transcription factor interactome. Conclusion. Framing the structure and dynamics of the NF-kB interactome within a wider, systemic picture would be a significant step toward a better understanding of how NF-kB globally regulates diverse gene programs and phenotypes. This study represents a step towards a more complete and integrated view of the NF-kB signaling system.
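
A minimal sketch of the graph-theoretical style of analysis, using networkx on a toy directed interaction network: it lists the highest-degree nodes and enumerates directed cycles, the kind of feedback loops discussed above. The edges and names are illustrative placeholders, not the reconstructed interactome.

```python
import networkx as nx

# Toy directed network: an edge u -> v means "u regulates or interacts with v".
# Gene/protein names here are purely illustrative placeholders.
edges = [
    ("TNF", "IKK"), ("IKK", "IkB"), ("IkB", "NFKB"), ("NFKB", "IkB_gene"),
    ("IkB_gene", "IkB"),            # NF-kB induces its own inhibitor: feedback loop
    ("NFKB", "IL6"), ("NFKB", "TNF_gene"), ("TNF_gene", "TNF"),  # another loop
]
G = nx.DiGraph(edges)

# Hub-like nodes: highest total degree in the toy network.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3]
print("highest-degree nodes:", hubs)

# Feedback loops: directed cycles passing through the network.
for cycle in nx.simple_cycles(G):
    print("feedback loop:", " -> ".join(cycle))
```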

Relevance:

30.00%

Publisher:

Abstract:

The introduction of dwarfing rootstocks in apple growing has led to a new concept of intensive planting systems, with the aim of producing early high yields and recovering the high initial investment. Although yield is an important aspect for the grower, the consumer has become demanding with regard to fruit quality and is generally attracted by appearance. To fulfil the consumer's expectations the grower may need to choose a proper training system along with an ideal pruning technique, which ensure good light distribution in different parts of the canopy and a marketable fruit quality in terms of size and skin colour. Even when these aspects are addressed, the fruits within the canopy might not all reach the proper ripening stage, because they are often heterogeneous. To describe the variability present in a tree, a software tool (PlantToon®) was used to recreate the tree architecture in 3D in the two training systems. The ripening stage of each fruit was determined using a non-destructive device (DA-Meter), thus allowing the fruit ripening variability to be estimated. This study deals with some of the main parameters that can influence fruit quality and ripening stage within the canopy, and with orchard management techniques that can improve fruit ripening homogeneity. Significant differences in fruit quality were found within the canopies due to fruit position, flowering time and bud wood age. The Bi-axis appeared to be suitable for high-density planting, even though its fruit quality traits were often similar to those obtained with a Slender Spindle, suggesting similar fruit light availability within the canopies. Crop load was confirmed to be an important factor influencing fruit quality, as was the interesting innovative pruning method "Click", in intensive planting systems.

Relevance:

30.00%

Publisher:

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely limit the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and validation of design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and at the same time improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
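
As a loose illustration of workload-driven voltage scaling, the sketch below picks the lowest operating point that meets a deadline and flags when extra memory protection would be needed near threshold; the operating points, the threshold limit and the protection rule are assumptions for this sketch, not the hybrid memory architecture devised in the thesis.

```python
# Illustrative DVFS-style policy: pick the lowest operating point that still
# meets the workload's deadline. All numbers below are made up for the sketch.
OPERATING_POINTS = [            # (voltage in V, frequency in MHz)
    (0.50, 100),                # near-threshold: most efficient, least reliable
    (0.70, 300),
    (0.90, 600),
    (1.10, 900),                # nominal super-threshold operation
]
NTC_VOLTAGE_LIMIT = 0.60        # below this, assume memories need extra protection

def choose_operating_point(workload_cycles, deadline_ms):
    required_mhz = workload_cycles / (deadline_ms * 1e3)   # cycles per us = MHz
    for voltage, freq in OPERATING_POINTS:                 # lowest energy first
        if freq >= required_mhz:
            protect_memory = voltage < NTC_VOLTAGE_LIMIT
            return voltage, freq, protect_memory
    return OPERATING_POINTS[-1] + (False,)                 # otherwise run flat out

# A light workload fits comfortably in near-threshold operation.
print(choose_operating_point(workload_cycles=2_000_000, deadline_ms=50))
# (0.5, 100, True)
```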

Relevance:

30.00%

Publisher:

Abstract:

The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic in the application favors incremental development and contains costs. Yet, achieving incrementality in the timing behavior is a much harder problem. Complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior highly dependent on execution history, which wrecks time composability and incrementality with it. Our goal here is to restore time composability to the execution stack, working bottom up across it. We first characterize time composability without making assumptions on the system architecture or the software deployed on it. Later, we focus on the role played by the real-time operating system in our pursuit. Initially we consider single-core processors and, becoming less permissive on the admissible hardware features, we devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work, we added support for limited preemption to ORK+, an absolute premiere in the landscape of real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we shy away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from real-world case studies from the avionics industry.
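
To illustrate limited (deferred) preemption, the sketch below simulates two fixed-priority jobs where the higher-priority arrival must wait until the running job reaches the end of its non-preemptive region; the task set and tick-based model are made up for illustration and bear no relation to TiCOS or ORK+ internals.

```python
from dataclasses import dataclass

# Tiny discrete-time sketch of deferred preemption: a running job can only be
# preempted when it leaves its non-preemptive region, not at arbitrary instants.
@dataclass
class Job:
    name: str
    priority: int            # lower number = higher priority
    release: int             # tick at which the job becomes ready
    remaining: int           # execution time still needed (ticks)
    np_region: int           # length of the non-preemptive region (ticks)

def run(jobs, ticks):
    pending = sorted(jobs, key=lambda j: j.release)
    ready, running, region_left, trace = [], None, 0, []
    for t in range(ticks):
        while pending and pending[0].release <= t:
            ready.append(pending.pop(0))
        # Honour preemption only at the boundary of the non-preemptive region.
        if running and region_left == 0 and ready and \
                min(j.priority for j in ready) < running.priority:
            ready.append(running)
            running = None
        if running is None and ready:
            ready.sort(key=lambda j: j.priority)
            running = ready.pop(0)
            region_left = running.np_region
        if running:
            trace.append(running.name)
            running.remaining -= 1
            region_left -= 1
            if running.remaining == 0:
                running = None
        else:
            trace.append("idle")
    return trace

jobs = [Job("low",  priority=2, release=0, remaining=5, np_region=3),
        Job("high", priority=1, release=1, remaining=2, np_region=2)]
print(run(jobs, ticks=7))
# ['low', 'low', 'low', 'high', 'high', 'low', 'low']
# "high" arrives at tick 1 but runs only once "low" exits its non-preemptive region.
```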