726 results for Fault-tolerant computing
Abstract:
The Alhama de Murcia fault is an 85 km long oblique-slip fault related to historical and instrumental seismic activity. A paleoseismic analysis of the Lorca-Totana sector of the fault, which contains MSK I=VIII historical earthquakes, was carried out in order to identify and quantify its seismic potential. We present (1) the results of the neotectonic, structural and geomorphological analyses and (2) the results of trenching. In the study area, the Alhama de Murcia fault forms a depressed corridor between two strands: the northwestern strand, with morphological and structural features of a reverse component of slip, bounding the La Tercia range to the south, and the southeastern strand, with evidence of sinistral oblique strike-slip movement. The offset along the latter strand trapped the sediments in transit from the La Tercia range towards the Guadalentín depression. The most recent of these sediments are arranged in three generations of alluvial fans and terraces. The first two trenches were dug in the most recent sediments across the southeastern fault strand. The results indicate coseismic reverse-fault deformation that involved the sedimentary sequence up to the intermediate alluvial fan and the Holocene terrace deposits. The sedimentary evolution observed in the trenches suggests an event of temporary damming of the Colmenar creek drainage to the south due to uplift of the hanging wall during coseismic activation of the fault. Trench, structural and sedimentological features provide evidence of at least three coseismic events, which occurred after 125,000 yr. The minimum vertical slip rate along the fault is 0.06 mm/yr, and the average recurrence period should not exceed 40,000 yr, in accordance with the results obtained by fan topographic profiling. Further absolute dating is ongoing to constrain these estimates.
Abstract:
We present an overview of current knowledge of the structure and seismic behavior of the Alhama de Murcia Fault (AMF). We use a fault-trace map created from a LIDAR DEM, combined with the geodynamic setting, morphological analysis, the distribution of seismicity, geological information from 1:50,000-scale geological maps, and the available paleoseismic data, to describe the recent activity of the AMF. We discuss the importance of uncertainties regarding the structure and kinematics of the AMF for the interpretation and spatial correlation of the paleoseismic data. In particular, we discuss the nature of the faults dipping to the SE (antithetic to the main faults of the AMF) in several segments studied in previous paleoseismic works. A special chapter is dedicated to the analysis of the tectonic source of the 2011 Lorca earthquake, which took place between two large segments of the fault.
Abstract:
One of the techniques used to detect faults in dynamic systems is analytical redundancy. An important difficulty in applying this technique to real systems is dealing with the uncertainties associated with the system itself and with the measurements. In this paper, this uncertainty is taken into account by using intervals for the parameters of the model and for the measurements. The proposed method checks the consistency between the system's behavior, obtained from the measurements, and the model's behavior; if they are inconsistent, then there is a fault. The problem of detecting faults is stated as a quantified real constraint satisfaction problem, which can be solved using modal interval analysis (MIA). MIA is used because it provides powerful tools for extending calculations over real functions to intervals. To improve fault detection, the simultaneous use of several sliding time windows is proposed. The result of implementing this method is semiqualitative tracking (SQualTrack), a fault-detection tool that is robust in the sense that it does not generate false alarms; i.e., if there are false alarms, they indicate either that the interval model does not represent the system adequately or that the interval measurements do not represent the true values of the variables adequately. SQualTrack is currently being used to detect faults in real processes. Some of these applications using real data were developed within the European project on advanced decision support systems for chemical/petrochemical manufacturing processes and are also described in this paper.
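The consistency test at the heart of this approach can be illustrated with a minimal sketch. This is not the actual SQualTrack implementation: the first-order model, the parameter intervals, and the measurement error below are illustrative assumptions.

```python
# Hedged sketch of interval-based fault detection by consistency checking.
# A fault is flagged only when the measurement interval and the model's
# predicted envelope do not intersect, which keeps false alarms out by design.

def predict_interval(x_prev, a_lo, a_hi, u, b_lo, b_hi):
    """Envelope of an assumed first-order model x[k] = a*x[k-1] + b*u,
    with interval parameters a in [a_lo, a_hi] and b in [b_lo, b_hi].
    For this monotone model, evaluating the corner cases is enough."""
    candidates = [a * x_prev + b * u
                  for a in (a_lo, a_hi) for b in (b_lo, b_hi)]
    return min(candidates), max(candidates)

def consistent(measured, meas_err, pred_lo, pred_hi):
    """True when [measured - err, measured + err] intersects the envelope."""
    return measured + meas_err >= pred_lo and measured - meas_err <= pred_hi

# Example: a nominal reading stays consistent, a large deviation does not.
lo, hi = predict_interval(2.0, 0.5, 0.75, 1.0, 0.25, 0.5)  # -> (1.25, 2.0)
assert consistent(1.9, 0.25, lo, hi)       # inside the envelope: no alarm
assert not consistent(3.0, 0.25, lo, hi)   # disjoint intervals: fault
```

In the paper's scheme this test would be repeated over several sliding time windows; the sketch shows only a single-step check.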
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, a memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
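The stateful logic operations mentioned above are commonly realised as material implication (IMPLY). As a hedged illustration only, the following toy model idealises each memristor as a single bit (1 for low resistance, 0 for high resistance; real devices are analog), and shows how IMPLY plus one cleared work device yields NAND, making the logic family functionally complete:

```python
# Toy bit-level model of memristive stateful (IMPLY) logic.
# Assumed abstraction: each memristor stores one logic value in its
# resistance state, and the IMPLY operation overwrites the target in place.

def imply(p, q):
    """Material implication p -> q, computed on the q device:
    q switches to 0 only when p = 1 and q = 0; otherwise q ends up 1."""
    return 0 if (p == 1 and q == 0) else 1

def nand(p, q, work=0):
    """NAND from two IMPLY steps and one work device cleared to 0
    (the FALSE operation), the classic stateful-logic construction."""
    work = imply(p, work)   # work = p -> 0 = NOT p
    return imply(q, work)   # q -> NOT p = NOT (p AND q)

# Full truth table of the constructed NAND gate.
assert [nand(p, q) for p in (0, 1) for q in (0, 1)] == [1, 1, 1, 0]
```

Since NAND is universal, any Boolean function could in principle be computed directly inside such a memristive memory array, which is the basis for the in-memory processing argument above.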
Abstract:
As manufacturing technologies advance, ever more transistors can be fitted on IC chips. More complex circuits make it possible to perform more computations per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat produced by the chip. Excessive heat limits circuit operation, so techniques are needed to reduce the energy consumption of circuits. A new research topic is small devices that monitor, for example, the human body, buildings or bridges. Such devices must consume little energy so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the one for which the manufacturer originally designed them. This slows down and impairs circuit operation; however, if reduced computing performance and reliability can be tolerated by the application, energy savings can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of previous studies found in the literature, and then by investigating the application of Near-Threshold Computing through two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a representative picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology, and real outcomes of the fabrication process are modelled by running numerous Monte Carlo simulations.
This inexpensive fabrication technology, combined with Near-Threshold Computing, makes it possible to manufacture low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces circuit energy consumption significantly. On the other hand, circuit speed degrades, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and upsized transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. For low-energy IC design, the results provide grounds for deciding whether to use the nominal supply voltage or to lower it, in which case the slower and less reliable behaviour of the circuit must be taken into account.
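The energy-versus-voltage trade-off described above can be sketched with a first-order model (an illustrative approximation, not the thesis's 130 nm simulation data): dynamic switching energy scales with C·Vdd², so lowering the supply from a nominal to a near-threshold voltage cuts energy per operation quadratically while slowing the circuit.

```python
# Back-of-the-envelope Near-Threshold Computing estimate using the
# standard first-order model E = C * Vdd^2 for dynamic switching energy.
# The voltage values are illustrative assumptions, not measured data.

def dynamic_energy(c_load, vdd):
    """Dynamic energy per switching event, E = C * Vdd^2 (normalised units)."""
    return c_load * vdd ** 2

nominal, near_threshold = 1.2, 0.5   # volts, assumed example operating points
saving = 1 - dynamic_energy(1.0, near_threshold) / dynamic_energy(1.0, nominal)
print(f"energy saved per switch: {saving:.0%}")
```

For these assumed voltages the model predicts an energy saving of roughly 80%, which is consistent with the thesis's qualitative conclusion that the savings are significant while speed and SRAM reliability suffer.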
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture that can further improve system performance and scalability while reducing design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing, healthcare information systems, and the impact of cloud computing on the healthcare industry. The review shows that, if the problems related to data security are solved, cloud computing will positively transform healthcare institutions by benefiting both healthcare IT infrastructure and healthcare services. This thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Multiprocessor system-on-chip (MPSoC) designs utilize the available technology and communication architectures to meet the requirements of upcoming applications. In an MPSoC, the communication platform is both the key enabler and the key differentiator for realizing efficient MPSoCs. It provides product differentiation to meet a diverse, multi-dimensional set of design constraints, including performance, power, energy, reconfigurability, scalability, cost, reliability and time-to-market. The communication resources of a single interconnection platform cannot be fully utilized by all kinds of applications; for example, providing high communication bandwidth to computation-intensive but not data-intensive applications is often infeasible in practical implementations. This thesis aims to perform architecture-level design space exploration towards efficient and scalable resource utilization for MPSoC communication architectures. To meet the performance requirements within the design constraints, careful selection of the MPSoC communication platform, together with resource-aware partitioning and mapping of the application, plays an important role. To enhance the utilization of communication resources, a variety of techniques can be used, such as resource sharing, multicast to avoid re-transmission of identical data, and adaptive routing. For implementation, these techniques should be customized according to the platform architecture. To address the resource utilization of MPSoC communication platforms, a variety of architectures with different design parameters and performance levels are selected, namely the Segmented bus (SegBus), Network-on-Chip (NoC) and Three-Dimensional NoC (3D-NoC). Average packet latency and power consumption are the evaluation parameters for the proposed techniques. In conventional computing architectures, a fault on one component makes the connected fault-free components inoperative.
A resource sharing approach can utilize the fault-free components to retain system performance by reducing the impact of faults. Design space exploration also helps narrow down the selection of the MPSoC architecture that can meet the performance requirements within the design constraints.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus the IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests. Therefore, the work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitters in a video. Furthermore, in a cloud computing environment such as Amazon EC2, the computing resources are more expensive than the storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them.
This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grain parallel processing approach where video is segmented at group of pictures level.
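The computation-versus-storage trade-off described above can be sketched as a simple retention rule: keep a transcoded video in the repository while the expected cost of re-transcoding it exceeds its storage cost. The prices and popularity figures below are illustrative assumptions, not the thesis's actual cost model.

```python
# Hedged sketch of a cost-efficient store-vs-recompute decision for
# transcoded videos, under assumed per-request and per-month prices.

def keep_in_repository(requests_per_month, transcode_cost, storage_cost_per_month):
    """Keep the transcoded copy only while the expected re-transcoding
    spend (popularity * compute price) exceeds the monthly storage bill."""
    expected_retranscode_cost = requests_per_month * transcode_cost
    return expected_retranscode_cost > storage_cost_per_month

# A popular video stays cached; a rarely requested one is evicted.
assert keep_in_repository(50, 0.10, 0.20)      # 5.00 > 0.20 -> keep
assert not keep_in_repository(1, 0.10, 0.20)   # 0.10 < 0.20 -> evict
```

Because request rates change over time, such a rule would be re-evaluated periodically, so a video is evicted once its popularity drops below the break-even point.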
Abstract:
The objective of this study was to evaluate glyphosate translocation in glyphosate-tolerant weed species (I. nil, T. procumbens and S. latifolia) compared to a glyphosate-susceptible species (B. pilosa). The evaluations of 14C-glyphosate absorption and translocation were performed at 6, 12, 36 and 72 hours after treatment (HAT) in I. nil and B. pilosa, and only at 72 HAT in the species T. procumbens and S. latifolia. The plants were collected and fractionated into application leaf, other leaves, stems, and roots. In S. latifolia, approximately 88% of the glyphosate remained in the application leaf and a small amount was translocated to the roots at 72 HAT. In T. procumbens, 75% of the applied herbicide remained in the treated leaf, with greater glyphosate translocation to the floral bud. It was concluded that the smaller amount of glyphosate translocated in S. latifolia and T. procumbens may partly account for their higher tolerance to glyphosate. However, I. nil tolerance to glyphosate may be associated with other factors such as metabolization, root exudation or compartmentalization, because a large amount of the herbicide reached the roots of this species.
Abstract:
The objective of this study was to evaluate the competitiveness of two drought-tolerant upland rice cultivars grown in coexistence with the weed S. verticillata, under conditions of absence and presence of water stress. The experiment was conducted in a greenhouse at the Experimental Station of the Universidade Federal de Tocantins, Gurupi-TO Campus. The experimental design was completely randomized in a 2 x 2 x 4 factorial with four replications. The treatments consisted of two rice cultivars under two water conditions and four weed densities. At 57 days after emergence, leaf area, dry weight of roots and shoots, total dry weight, and root concentration and depth were evaluated in the rice cultivars and in the weed S. verticillata. Plant height and number of tillers were also evaluated in the rice cultivars. Water stress reduced the leaf area, the root concentration and the vegetative dry matter components (MSPA, MSR and MST) of the rice cultivars Jatobá and Catetão and of the weed S. verticillata. The competition established by the presence of the weed reduced all vegetative components (MSPA, MSR and MST) of the cultivars Jatobá and Catetão, and also decreased the number of tillers, the root concentration and the leaf area. At the highest level of weed competition with the rice cultivars, a greater decrease in the vegetative components and leaf area of the crop was observed, regardless of water conditions.
Abstract:
The loss of grains during the harvest of glyphosate-tolerant corn may generate volunteer plants, which can interfere with the conventional or glyphosate-tolerant crop grown in succession. The current work aimed to evaluate the control of volunteer glyphosate-tolerant corn at two stages of development. Two experiments were conducted with the hybrid 2B688 HR (lepidoptera- and glyphosate-tolerant), with applications at the V5 and V8 stages. The experiments used a randomized block design with four replicates and the following treatments: haloxyfop at 25, 50 and 62 g ha-1, alone and associated with 2,4-D at 670 g ha-1 or fluroxypyr at 200 g ha-1. The standard was clethodim at 84 g ha-1 with 2,4-D and fluroxypyr at the same rates. Applications of haloxyfop and clethodim, either alone or mixed with 2,4-D or fluroxypyr, at the V5 stage provided total control (100%) at 32 and 39 days after application, except for the haloxyfop + 2,4-D (25 + 670 g ha-1) mixture, which did not provide adequate control. At the V8 stage, the haloxyfop + 2,4-D (50 + 670 g ha-1) and haloxyfop + 2,4-D (62 + 670 g ha-1) mixtures took up to 6 and 10 days longer, respectively, to reach adequate to excellent control, compared with haloxyfop applied alone at the same doses. The treatments efficiently controlled volunteer corn plants at the V5 stage, except for the haloxyfop + 2,4-D (25 + 670 g ha-1) mixture. For V8-stage applications, haloxyfop, either alone or mixed with fluroxypyr, showed excellent control at every evaluated dose. Mixture with 2,4-D can reduce haloxyfop efficiency at low doses. Clethodim, alone or mixed with 2,4-D or fluroxypyr, did not provide an acceptable level of control.
Abstract:
Herbicides used in the Clearfield(r) rice system may persist in the environment, damaging non-tolerant crops sown in succession and/or rotation. These damages vary according to soil characteristics, climate and soil management. The thickness of the soil profile may affect the carryover effect; deeper soils may allow these molecules to leach below the root absorption zone. The aim of this study was to evaluate the effect of soil profile thickness on the carryover of imazethapyr + imazapic to ryegrass and non-tolerant rice, sown in succession and in rotation with rice, respectively. Lysimeters of different thicknesses (15, 20, 30, 40, 50 and 65 cm) were constructed, and 1 L ha-1 of the formulated imazethapyr + imazapic mixture was applied to tolerant rice. Imidazolinone-tolerant rice was planted first, followed by ryegrass and non-tolerant rice in succession and rotation, respectively. Herbicide injury, height reduction and dry weight of the non-tolerant species were assessed. There were no visual symptoms of herbicide injury on ryegrass sown 128 days after the herbicide application; however, the herbicides caused dry weight reduction in the plants. The herbicides persisted in the soil and caused injury in non-tolerant rice sown 280 days after application, and the deeper the soil profile, the lower the herbicide injury on irrigated rice.