870 results for Modular integrated utility systems.
Abstract:
Despite the many issues faced in the past, the evolution of silicon has kept a constant pace, and today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely curtail the potential computational capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on these aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations focusing on the instruction-caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology severely limit the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC: memory operation in particular becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome these reliability issues and, at the same time, to improve energy efficiency by means of aggressive voltage scaling whenever workload requirements allow it. Variability is another great drawback of near-threshold operation: the greatly increased sensitivity to threshold-voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture. By means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.
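A note on the tenfold figure: dynamic switching energy scales quadratically with supply voltage, so a back-of-the-envelope estimate with representative voltage values (illustrative, not taken from the thesis) already accounts for roughly half of the gain:

```latex
% Dynamic energy per operation vs. supply voltage (illustrative values):
E_{\mathrm{dyn}} = C_{\mathrm{eff}}\,V_{dd}^{2}
\quad\Rightarrow\quad
\frac{E_{\mathrm{dyn}}(1.1\,\mathrm{V})}{E_{\mathrm{dyn}}(0.5\,\mathrm{V})}
= \left(\frac{1.1}{0.5}\right)^{2} \approx 4.8
```

The remaining factor comes from operating near the minimum-energy point, where the leakage energy accumulated by slower operations balances the shrinking dynamic energy per operation.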
Abstract:
The energy harvesting research field has grown considerably in the last decade due to increasing interest in energy-autonomous sensing systems, which require smart and efficient interfaces for extracting power from energy sources, together with power management (PM) circuits. This thesis investigates the design trade-offs for minimizing the intrinsic power of PM circuits, in order to allow operation with very weak energy sources. For validation purposes, three different integrated power converter and PM circuits for energy harvesting applications are presented. They have been designed for nano-power operation, and the single-source converters can operate with input power lower than 1 μW. The first IC is a buck-boost converter for piezoelectric transducers (PZ) implementing Synchronous Electrical Charge Extraction (SECE), a non-linear energy extraction technique. Moreover, the Residual Charge Inversion technique is exploited for extracting energy from PZ with weak and irregular excitations (i.e. lower voltage), and the implemented PM policy, named Two-Way Energy Storage, considerably reduces the start-up time of the converter, improving the overall conversion efficiency. The second proposed IC is a general-purpose buck-boost converter for low-voltage DC energy sources up to 2.5 V. An ultra-low-power MPPT circuit has been designed in order to track variations of source power. Furthermore, a capacitive boost circuit has been included, allowing the converter to start up from a source voltage VDC0 = 223 mV. A nano-power programmable linear regulator is also included in order to provide a stable voltage to the load. The third IC implements a heterogeneous multi-source buck-boost converter. It provides up to 9 independent input channels, of which 5 are specific to PZ (with SECE) and 4 to DC energy sources with MPPT. The inductor is shared among channels, and an arbiter, designed with asynchronous logic to reduce energy consumption, avoids simultaneous access to the buck-boost core with a dynamic schedule based on source priority.
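The abstract does not state which MPPT algorithm the second IC implements; a common nano-power choice is fractional open-circuit-voltage tracking, sketched below in Python purely for illustration (all names and constants are hypothetical):

```python
# Hypothetical sketch of fractional open-circuit-voltage (FOC) MPPT, one of
# the simplest schemes suitable for nano-power controllers. The thesis does
# not state which MPPT algorithm its IC uses; everything below is illustrative.

K_FOC = 0.5  # MPP voltage as a fraction of V_oc; depends on the source type


def mpp_reference(v_oc: float) -> float:
    """Estimate the maximum-power-point voltage from a sampled V_oc."""
    return K_FOC * v_oc


def run_mppt(sample_voc, set_vref, cycles: int) -> None:
    """Periodically re-sample V_oc (source briefly unloaded) and update
    the converter's input-voltage reference."""
    for _ in range(cycles):
        v_oc = sample_voc()
        set_vref(mpp_reference(v_oc))


if __name__ == "__main__":
    # Toy source whose open-circuit voltage drifts over time.
    import itertools
    drift = itertools.cycle([0.40, 0.42, 0.38])
    run_mppt(lambda: next(drift), lambda v: print(f"V_ref = {v:.3f} V"), cycles=3)
```

The appeal of this scheme for nano-power design is that a single sample-and-scale step replaces continuous power measurement.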
Abstract:
Traditional cell culture models have limitations in extrapolating the functional mechanisms that underlie strategies of microbial virulence; indeed, during infection pathogens adapt to different tissue-specific environmental factors. The development of in vitro models resembling human tissue physiology might allow the replacement of inaccurate or aberrant animal models. Three-dimensional (3D) cell culture systems are more reliable and more predictive models that can be used for the meaningful dissection of host–pathogen interactions. The lung and gut mucosae often represent the first site of exposure to pathogens and provide a physical barrier against their entry. Within this context, the tracheobronchial and small intestine tracts were modelled by a tissue-engineering approach. The main work focused on the development and extensive characterization of a human organotypic airway model, based on a mechanically supported co-culture of normal primary cells. The regained morphological features, the retrieved environmental factors and the presence of specific epithelial subsets resembled the native tissue organization. In addition, the respiratory model enabled the modular insertion of interesting cell types, such as innate immune cells or multipotent stromal cells, showing a functional ability to differentially release pertinent cytokines. Furthermore, this model responded by imitating known events occurring during infection by non-typeable H. influenzae. Epithelial organoid models mimicking the small intestine tract were used for a different explorative analysis of tissue toxicity. Further experiments led to the detection of a cell population targeted by C. difficile Toxin A and suggested a role in the impairment of epithelial homeostasis by the bacterial virulence machinery. The described cell-centered strategy can afford critical insights into the evaluation of host defence and pathogenic mechanisms. The application of these two models may provide an informative step that more coherently defines the relevant molecular interactions occurring during infection.
Abstract:
Since the Three Mile Island Unit 2 (TMI-2) accident in 1979, which led to the meltdown of about one half of the reactor core and to limited releases of radioactive materials into the environment, an important international effort has been devoted to severe accident research. The present work aims to investigate the behaviour of a Small Modular Reactor (SMR) under severe accident conditions. In order to perform these analyses, an SMR has been modelled for the European reference severe-accident analysis code ASTEC, developed by IRSN and GRS. The thesis describes in detail the IRIS Small Modular Reactor, the reference reactor chosen to develop the ASTEC input deck. The IRIS model was developed in the framework of a research collaboration with the IRSN development team. The thesis then systematically describes the creation of the ASTEC IRIS input deck: the nodalization scheme adopted, the solution used to simulate the passive safety systems, and the strong interaction between the reactor vessel and the containment. The ASTEC SMR model is tested against the RELAP-GOTHIC coupled-code model with respect to a Design Basis Accident, to evaluate the capability of the ASTEC code to correctly reproduce the behaviour of the nuclear system. Once the model has been validated, a severe accident scenario is simulated, and the obtained results, along with the nuclear system response, are analysed.
Abstract:
Management Control System (MCS) research is undergoing turbulent times. Long related solely to cybernetic instruments of management accounting, MCS are increasingly seen as complex systems comprising not only formal accounting-driven instruments, but also informal mechanisms of control based on organizational culture. But not only have the means of MCS changed; researchers increasingly apply MCS to organizational goals other than strategy implementation.

Taking the question "How do I design a well-performing MCS?" as a starting point, this dissertation aims at providing a comprehensive and integrated overview of the current state of MCS research. Opting for a definition of MCS that is broad in terms of means (all formal as well as informal MCS instruments) but focused in terms of objectives (behavioral control only), the dissertation contributes to MCS theory by a) developing an integrated (contingency) model of MCS, describing its contingencies as well as its subcomponents, b) refining the equifinality model of Gresov/Drazin (1997), and c) synthesizing research findings from contingency and configuration research concerning MCS, taking into account case studies on research topics such as ambidexterity, equifinality and time as a contingency.
Abstract:
Silicon-on-insulator (SOI) is rapidly emerging as a very promising material platform for integrated photonics: it combines the potential for optoelectronic integration with low-cost, large-volume manufacturing capabilities, and it has already accumulated a large number of applications in areas like sensing, quantum optics, optical telecommunications and metrology. One of the main limitations of the current technology is that waveguide propagation losses are still much higher than in standard glass-based platforms, for many reasons, such as bends, surface roughness and the very strong optical confinement provided by SOI. Such high loss prevents the fabrication of efficient optical resonators and complex devices, severely limiting the current potential of the SOI platform. The first part of the project deals with the loss of simple waveguides, linking it to polarization effects, with the loss measured by the Fabry-Perot technique. The second part of the thesis deals with Bragg grating characterization, again from the point of view of polarization effects, which leads to better use of the stop band in filters. For better comprehension, both parts include a brief review of the basics of SOI and of integrated Bragg gratings, covering fabrication techniques and some applications; the third and fourth chapters each end with results that should make the preceding explanations easier to follow.
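For reference, the textbook form of the Fabry-Perot loss-extraction technique mentioned above (a standard formulation, not quoted from the thesis): the cleaved facets of reflectivity R form a cavity of length L, and the contrast K of the transmission fringes yields the propagation loss α:

```latex
% Fringe contrast K of the transmitted intensity, facet reflectivity R,
% waveguide length L (in cm), propagation loss \alpha:
K = \frac{I_{\max}-I_{\min}}{I_{\max}+I_{\min}},
\qquad
R\,e^{-\alpha L} = \frac{1-\sqrt{1-K^{2}}}{K},
\qquad
\alpha\,[\mathrm{dB/cm}] = -\frac{10}{L}\,\log_{10}\!\left(\frac{1-\sqrt{1-K^{2}}}{K\,R}\right)
```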
Abstract:
In the last 10 years the number of mobile devices has grown rapidly. Each person usually carries at least two personal devices, and researchers say that in the near future this number could rise to ten devices per person. Moreover, all these devices are becoming more integrated into our lives than in the past, so the amount of data exchanged grows along with the improvement of people's lifestyles. This is what researchers call the Internet of Things. In the future there will thus be more than 60 billion nodes, and the current infrastructure is not ready to keep track of all the data exchanged between them. Infrastructure improvements have therefore been proposed in recent years, such as MobileIP and HIP, to facilitate the exchange of packets in mobility, but none of them has been optimized for this purpose. In recent years, researchers from Mid Sweden University created the MediaSense Framework. Initially this framework was based on the Chord protocol to route packets in a large network, but the most important change has been the introduction of P-Grid to create the overlay and provide persistence. Thanks to this technology, a lookup in the trie takes up to 0.5*log(N) hops, where N is the total number of nodes in the network. This result could be improved by further optimizations in the management of the nodes, for example the dynamic creation of groups of nodes. Moreover, since the nodes move, underlying support for connectivity management is needed; SCTP has been selected as one of the most promising upcoming standards for managing multiple simultaneous connections.
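To put the quoted lookup cost in perspective, assuming a base-2 logarithm (the abstract does not specify) and taking the 60 billion-node figure above:

```latex
% Hop count for a P-Grid lookup at Internet-of-Things scale:
\tfrac{1}{2}\log_{2} N
= \tfrac{1}{2}\log_{2}\!\left(6\times 10^{10}\right)
\approx \tfrac{1}{2}\times 35.8
\approx 18\ \text{hops}
```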
Abstract:
Our generation of computational scientists is living in an exciting time: not only do we get to pioneer important algorithms and computations, we also get to set standards on how computational research should be conducted and published. From Euclid’s reasoning and Galileo’s experiments, it took hundreds of years for the theoretical and experimental branches of science to develop standards for publication and peer review. Computational science, rightly regarded as the third branch, can walk the same road much faster. The success and credibility of science are anchored in the willingness of scientists to expose their ideas and results to independent testing and replication by other scientists. This requires the complete and open exchange of data, procedures and materials. The idea of “replication by other scientists” in reference to computations is more commonly known as “reproducible research”. In this context the journal “EAI Endorsed Transactions on Performance & Modeling, Simulation, Experimentation and Complex Systems” had the exciting and original idea of allowing scientists to submit, together with the article, the computational materials (software, data, etc.) used to produce its contents. The goal of this procedure is to allow the scientific community to verify the content of the paper by reproducing it on the platform, independently of the chosen OS, to confirm or invalidate it, and especially to allow its reuse to produce new results. This procedure is of little help, however, without some minimal methodological support: the raw data sets and the software are difficult to exploit without the logic that guided their use or production. This led us to conclude that, in addition to the data sets and the software, one further element must be provided: the workflow that ties all of them together.
Abstract:
In recent years, European countries have paid increasing attention to renewable sources and greenhouse emissions. The Council of the European Union and the European Parliament have established ambitious targets for the coming years. In this scenario, biomass plays a prominent role, since its life cycle produces a net-zero carbon dioxide emission. Additionally, biomass can ensure plant operation continuity thanks to its availability and storability. Several conventional systems running on biomass are available at the moment; most of them perform well either at large scale or in the small power range. The absence of an efficient system at the small-to-middle scale inspired this thesis project. Its subject is an innovative plant based on a wet indirectly fired gas turbine (WIFGT) integrated with an organic Rankine cycle (ORC) unit for combined heat and power production. The WIFGT performs well in the small-to-middle power range, while the ORC unit is capable of giving value to low-temperature heat sources. Their integration is investigated in this thesis with the aim of carrying out a preliminary design of the components. The targeted plant output is around 200 kW, so as not to need a wide cultivation area and to avoid biomass shipping. Existing in-house simulation tools are used and adapted to this purpose. First, the WIFGT + ORC model is built, using zero-dimensional models of the heat exchangers, compressor, turbines, furnace, dryer and pump. Different working fluids are evaluated, and toluene and benzene turn out to be the most suitable. In the indirectly fired gas turbine, a pressure ratio around 4 leads to the highest efficiency. The thermodynamic analysis shows an electric efficiency of 38%, outperforming other conventional plants in the same power range. The combined plant is designed to recover thermal energy: water is used as the coolant in the condenser and is heated from 60°C up to 90°C, enabling space heating. One-dimensional models are used to design the heat-exchange equipment, with different types of heat exchangers chosen depending on the working temperature. A finned-plate heat exchanger is selected for the WIFGT heat-transfer equipment due to the high-temperature, oxidizing and corrosive environment. A once-through boiler with finned tubes is chosen to vaporize the organic fluid in the ORC, while a plate heat exchanger is chosen for the condenser and the recuperator. A quasi-one-dimensional model of a single-stage axial turbine is implemented to design both the WIFGT and the ORC turbines. The system simulation after the component design shows an electric efficiency around 34%, a decrease of about 10% compared to the zero-dimensional analysis. The work exhibits the potential of the system compared to existing plants from both a technical and an economic point of view.
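To illustrate what a zero-dimensional model of the gas-turbine side amounts to, here is a minimal air-standard Brayton-cycle sketch in Python; the component efficiencies and temperatures are generic placeholders, not the thesis's WIFGT data:

```python
# Minimal zero-dimensional (air-standard) gas-turbine cycle sketch, to
# illustrate the kind of lumped model the abstract refers to. All values
# below are generic placeholders, not the thesis's actual WIFGT data.

GAMMA = 1.4   # heat-capacity ratio of air
CP = 1005.0   # specific heat at constant pressure [J/(kg K)]

def brayton_efficiency(pr, t_in, t_max, eta_c=0.85, eta_t=0.88):
    """Thermal efficiency of a simple Brayton cycle with compressor/turbine
    isentropic efficiencies (no recuperator, no wet cycle, no ORC)."""
    tau = pr ** ((GAMMA - 1.0) / GAMMA)              # isentropic temperature ratio
    t2 = t_in * (1.0 + (tau - 1.0) / eta_c)          # compressor outlet temperature
    t4 = t_max * (1.0 - eta_t * (1.0 - 1.0 / tau))   # turbine outlet temperature
    w_net = CP * ((t_max - t4) - (t2 - t_in))        # net specific work [J/kg]
    q_in = CP * (t_max - t2)                         # specific heat input [J/kg]
    return w_net / q_in

if __name__ == "__main__":
    # Pressure ratio around 4, as the abstract reports for the WIFGT.
    print(f"eta = {brayton_efficiency(pr=4.0, t_in=288.0, t_max=1100.0):.3f}")
```

The 38% electric efficiency reported in the abstract additionally reflects the wet cycle, the indirect firing arrangement and the bottoming ORC, none of which this toy model includes.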
Abstract:
This paper provides a description of integrated engineering workstations (IEWs) used in undergraduate electrical engineering laboratories. The IEWs are used for the design, analysis, and testing of engineering systems. Examples of laboratory experiments and software programs are presented.
Abstract:
Epileptic seizures are due to the pathological collective activity of large cellular assemblies. A better understanding of this collective activity is integral to the development of novel diagnostic and therapeutic procedures. In contrast to reductionist analyses, which focus solely on small-scale characteristics of ictogenesis, here we follow a systems-level approach, which combines both small-scale and larger-scale analyses. Peri-ictal dynamics of epileptic networks are assessed by studying correlation within and between different spatial scales of intracranial electroencephalographic recordings (iEEG) of a heterogeneous group of patients suffering from pharmaco-resistant epilepsy. Epileptiform activity as recorded by a single iEEG electrode is determined objectively by the signal derivative and then subjected to a multivariate analysis of correlation between all iEEG channels. We find that during seizure, synchrony increases on the smallest and largest spatial scales probed by iEEG. In addition, a dynamic reorganization of spatial correlation is observed on intermediate scales, which persists after seizure termination. It is proposed that this reorganization may indicate a balancing mechanism that decreases high local correlation. Our findings are consistent with the hypothesis that during epileptic seizures hypercorrelated and therefore functionally segregated brain areas are re-integrated into more collective brain dynamics. In addition, except for a special sub-group, a highly significant association is found between the location of ictal iEEG activity and the location of areas of relative decrease of localised EEG correlation. The latter could serve as a clinically important quantitative marker of the seizure onset zone (SOZ).
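The multivariate correlation analysis described above reduces, in its simplest form, to a zero-lag correlation matrix over all iEEG channels; the sketch below (illustrative only, since the study's actual referencing, windowing and scale definitions are not given in the abstract) shows that reduction in Python:

```python
# Generic sketch of the multivariate correlation analysis the abstract
# describes: a zero-lag correlation matrix across all iEEG channels.
# The actual study pipeline is not specified in the abstract.
import numpy as np

def correlation_matrix(ieeg: np.ndarray) -> np.ndarray:
    """ieeg: array of shape (n_channels, n_samples). Returns the
    (n_channels x n_channels) Pearson correlation matrix."""
    return np.corrcoef(ieeg)

def mean_offdiagonal(corr: np.ndarray) -> float:
    """Average correlation over all distinct channel pairs, a simple
    global (largest-scale) synchrony index."""
    n = corr.shape[0]
    mask = ~np.eye(n, dtype=bool)
    return float(np.abs(corr[mask]).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    common = rng.standard_normal(2000)                      # shared rhythm
    data = 0.5 * common + rng.standard_normal((16, 2000))   # 16 channels
    c = correlation_matrix(data)
    print(f"global synchrony index: {mean_offdiagonal(c):.3f}")
```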
Abstract:
Profiling miRNA expression in cells that directly contribute to human disease pathogenesis is likely to aid the discovery of novel drug targets and biomarkers. However, tissue heterogeneity and the limited amount of human diseased tissue available for research purposes present fundamental difficulties that often constrain the scope and potential of such studies. We established a flow cytometry-based method for isolating pure populations of pathogenic T cells from bronchial biopsy samples of asthma patients, and optimized a high-throughput nano-scale qRT-PCR method capable of accurately measuring 96 miRNAs in as few as 100 cells. Comparison of circulating and airway T cells from healthy and asthmatic subjects revealed asthma-associated and tissue-specific miRNA expression patterns. These results establish the feasibility and utility of investigating miRNA expression in small populations of cells involved in asthma pathogenesis, and set a precedent for the application of our nano-scale approach to other human diseases. The microarray data from this study (Figure 7) have been submitted to the NCBI Gene Expression Omnibus (GEO; http://ncbi.nlm.nih.gov/geo) under accession no. GSE31030.
Abstract:
One of the major challenges for a mission to the Jovian system is the radiation tolerance of the spacecraft (S/C) and the payload. Moreover, being able to achieve science observations with high signal-to-noise ratios (SNR), while passing through the high-flux radiation zones, requires additional ingenuity on the part of the instrument provider. Consequently, radiation mitigation is closely intertwined with the payload, spacecraft and trajectory design, and requires a systems-level approach. This paper presents a design for the Io Volcano Observer (IVO), a Discovery mission concept that makes multiple close encounters with Io while orbiting Jupiter. The mission aims to answer key outstanding questions about Io, especially the nature of its intense active volcanism and the internal processes that drive it. The payload includes narrow-angle and wide-angle cameras (NAC and WAC), dual fluxgate magnetometers (FGM), a thermal mapper (ThM), dual ion and neutral mass spectrometers (INMS), and dual plasma ion analyzers (PIA). The radiation mitigation is implemented by drawing upon experience from designs and studies for missions such as the Radiation Belt Storm Probes (RBSP) and Jupiter Europa Orbiter (JEO). At the core of the radiation mitigation is IVO's inclined and highly elliptical orbit, which leads to rapid passes through the most intense radiation near Io, minimizing the total ionizing dose (177 krads behind 100 mils of Aluminum with a radiation design margin (RDM) of 2 after 7 encounters). The payload and the spacecraft are designed specifically to accommodate the fast flyby velocities (e.g. the spacecraft is radioisotope powered, remaining small and agile without any flexible appendages). The science instruments, which collect the majority of the high-priority data when close to Io and thus near the peak flux, also have to mitigate transient noise in their detectors. The cameras use a combination of shielding and CMOS detectors with extremely fast readout to minimize noise. The INMS microchannel plate detectors and PIA channel electron multipliers require additional shielding. The FGM is not sensitive to noise induced by energetic particles, and the ThM microbolometer detector is nearly insensitive. Detailed SNR calculations are presented. To facilitate targeting agility, all of the spacecraft components are shielded separately, since this approach is more mass-efficient than using a radiation vault. IVO uses proven radiation-hardened parts (rated at 100 krad behind equivalent shielding of 280 mils of Aluminum with an RDM of 2) and is expected to have ample mass margin to increase shielding if needed.