962 results for Testing of embedded cores
Abstract:
Virtual testing of composite materials has emerged as a new concept within the aerospace industry. It offers substantial potential to reduce the large certification costs and long development times associated with experimental campaigns, which involve the testing of a large number of panels, sub-components and components. The aim of virtual testing is to replace some experimental tests with high-fidelity numerical simulations. This work is a contribution to the multiscale approach developed at the IMDEA Materials Institute to predict the mechanical behavior of a composite laminate from the properties of the ply and the interply. Continuum Damage Mechanics (CDM) formulates intraply damage at the material constitutive level. Intraply CDM is combined with cohesive elements to model interply damage. A CDM model was developed, implemented, and applied to simple mechanical tests of laminates: low- and high-velocity impact, tension of coupons, and shear deformation. The analysis of the results and the comparison with experiments indicated that the performance was reasonably good for the impact tests, but insufficient in the other cases. To overcome the limitations of CDM, the kinematics of the discrete finite element approximation was enhanced to include mesh-embedded discontinuities: the eXtended Finite Element Method (X-FEM). The X-FEM was adapted to an explicit time integration scheme and was able to reproduce qualitatively the physical failure mechanisms in a composite laminate. However, the results revealed an inconsistency in the formulation that leads to erroneous quantitative results. Finally, the traditional X-FEM was reviewed, and a new method was developed to overcome its limitations: the stable cohesive X-FEM.
The properties of the new method were studied in detail, and it was demonstrated that the new method was robust and can be implemented in an explicit finite element formulation, providing a new tool for damage simulation in composite materials.
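As a minimal illustration of the continuum damage mechanics concept described in this abstract, the sketch below implements a one-dimensional damage law with linear softening and irreversible damage growth. The constants and strain values are invented for illustration and are not taken from the thesis; the actual intraply model is far richer.

```python
def damage(eps, eps0, epsf):
    """Linear-softening damage variable for strain eps.

    eps0: damage-onset strain; epsf: full-failure strain.
    Returns d in [0, 1]; the degraded stress is (1 - d) * E * eps.
    """
    if eps <= eps0:
        return 0.0
    if eps >= epsf:
        return 1.0
    return epsf * (eps - eps0) / (eps * (epsf - eps0))

# Illustrative values only (not from the thesis).
E, eps0, epsf = 70e3, 0.01, 0.03  # modulus in MPa, onset/failure strains
d_hist = 0.0
for eps in [0.005, 0.012, 0.02, 0.015, 0.03]:
    d_hist = max(d_hist, damage(eps, eps0, epsf))  # damage never heals
    sigma = (1.0 - d_hist) * E * eps               # degraded stress
```

Note the `max` in the loop: on unloading (the 0.015 step) the damage variable stays at its historical maximum, which is what makes the constitutive response dissipative.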
Abstract:
High-quality software, delivered on time and budget, constitutes a critical part of most products and services in modern society. Our government has invested billions of dollars to develop software assets, often redeveloping the same capability many times. Recognizing the waste involved in redeveloping these assets, in 1992 the Department of Defense issued the Software Reuse Initiative. The vision of the Software Reuse Initiative was "To drive the DoD software community from its current 're-invent the software' cycle to a process-driven, domain-specific, architecture-centric, library-based way of constructing software." Twenty years after issuing this initiative, there is evidence of this vision beginning to be realized in nonembedded systems. However, virtually every large embedded system undertaken has incurred large cost and schedule overruns. Investigations into the root cause of these overruns implicate reuse. Why are we seeing improvements in the outcomes of these large-scale nonembedded systems and worse outcomes in embedded systems? This question is the foundation for this research. The experiences of the aerospace industry have led to a number of questions about reuse and how the industry is employing reuse in embedded systems. For example, does reuse in embedded systems yield the same outcomes as in nonembedded systems? Are the outcomes positive? If the outcomes are different, it may indicate that embedded systems should not use data from nonembedded systems for estimation. Are embedded systems using the same development approaches as nonembedded systems? Does the development approach make a difference? If embedded systems develop software differently from nonembedded systems, it may mean that the same processes do not apply to both types of systems. What about the reuse of different artifacts? Perhaps there are certain artifacts that, when reused, contribute more or are more difficult to use in embedded systems.
Finally, what are the success factors and obstacles to reuse? Are they the same in embedded systems as in nonembedded systems? The research in this dissertation comprises a series of empirical studies using professionals in the aerospace and defense industry as its subjects. The main focus has been to investigate the reuse practices of embedded systems professionals and nonembedded systems professionals and to compare the methods and artifacts used against the outcomes. The research has followed a combined qualitative and quantitative design approach. The qualitative data were collected by surveying software and systems engineers, interviewing senior developers, and reading numerous documents and other studies. Quantitative data were derived by converting survey and interview respondents' answers into codes that could be counted and measured. From the search of the existing empirical literature, we learned that reuse in embedded systems is in fact significantly different from that in nonembedded systems, particularly in effort under a model-based development approach and in quality where the development approach was not specified. The questionnaire showed differences in the development approach used in embedded projects compared with nonembedded projects; in particular, embedded systems were significantly more likely to use a heritage/legacy development approach. There was also a difference in the artifacts used, with embedded systems more likely to reuse hardware, test products, and test clusters. Nearly all the projects reported using code, but the questionnaire showed that the reuse of code brought mixed results. One of the differences expressed by the respondents to the questionnaire was the difficulty of reusing code for embedded systems when the platform changed. The semistructured interviews were performed to tell us why the phenomena in the review of literature and the questionnaire were observed.
We asked respected industry professionals, such as senior fellows, fellows, and distinguished members of technical staff, about their experiences with reuse. We learned that many embedded systems used heritage/legacy development approaches because their systems had been around for many years, before models and modeling tools became available. We learned that reuse of code is beneficial primarily when the code does not require modification; especially in embedded systems, once it has to be changed, reuse of code yields few benefits. Finally, while platform independence is a goal for many in nonembedded systems, it is certainly not a goal for embedded systems professionals, and in many cases it is a detriment. However, both embedded and nonembedded systems professionals endorsed the idea of platform standardization. Finally, we conclude that while reuse in embedded systems and nonembedded systems is different today, the two are converging. As heritage embedded systems are phased out, models become more robust, and platforms are standardized, reuse in embedded systems will become more like that in nonembedded systems.
Abstract:
Due to the growing amount of data being processed and the increasing need for high-performance computing, significant changes are taking place in computer architecture design. There has been a migration from the sequential to the parallel paradigm, with hundreds or thousands of processing cores on a single chip. In this context, power management becomes increasingly important, especially in embedded systems, which are usually battery-powered. According to Moore's Law, processor performance doubles every 18 months, whereas battery capacity doubles only every 10 years. This situation creates an enormous gap, which can be mitigated by using heterogeneous multi-core architectures. A fundamental challenge that remains open for these architectures is the integration of embedded code development, scheduling, and power-management hardware. The overall goal of this doctoral work is to investigate techniques for optimizing the performance/energy-consumption trade-off in single-ISA heterogeneous multi-core architectures implemented on FPGAs. To this end, we sought solutions that achieve the best possible performance at an optimal energy consumption. This was done by combining data mining for the analysis of thread-based software with traditional power-management techniques, such as dynamic way-shutdown, and a new heterogeneity-aware scheduling policy. The main contributions include the combination of power-management techniques at several levels (hardware, scheduling, and compilation), and a scheduling policy integrated with a multi-core architecture that is heterogeneous with respect to L1 cache size.
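The heterogeneity-aware scheduling idea can be sketched as a greedy policy that maps the most compute-intensive threads onto the fast cores and the rest onto the low-power cores (legal on any core because the architecture is single-ISA). The thread names, IPC values, and two-way core split below are hypothetical; the thesis's actual policy is certainly more elaborate.

```python
def schedule(threads, n_big, n_little):
    """Greedy heterogeneity-aware placement.

    threads: list of (name, ipc) pairs, where IPC stands in for
    compute intensity (a simplifying assumption).
    Returns a dict mapping thread name -> "big" or "little".
    """
    ranked = sorted(threads, key=lambda t: t[1], reverse=True)
    placement = {}
    for i, (name, _ipc) in enumerate(ranked):
        # highest-IPC threads fill the big cores first
        placement[name] = "big" if i < n_big else "little"
    return placement

# Hypothetical workload: two compute-bound and two memory-/IO-bound threads.
threads = [("decode", 2.1), ("io_wait", 0.3), ("fft", 1.8), ("log", 0.5)]
placement = schedule(threads, n_big=2, n_little=2)
```

A real policy would also react to phase changes at runtime (migrating threads between core types) rather than deciding once, but the ranking step above is the core of the heuristic.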
Abstract:
Smart et al. (2014) suggested that the detection of nitrate spikes in polar ice cores from solar energetic particle (SEP) events could be achieved if an analytical system with sufficiently high resolution was used. Here we show that the spikes they associate with SEP events are not reliably recorded in cores from the same location, even when the resolution is clearly adequate. We explain the processes that limit the effective resolution of ice cores. Liquid conductivity data suggest that the observed spikes are associated with sodium or another nonacidic cation, making it likely that they result from deposition of sea salt or similar aerosol that has scavenged nitrate, rather than from a primary input of nitrate in the troposphere. We consider that there is no evidence at present to support the identification of any spikes in nitrate as representing SEP events. Although such events undoubtedly create nitrate in the atmosphere, we see no plausible route to using nitrate spikes to document the statistics of such events.
Abstract:
Two late Quaternary sediment cores from the northern Cape Basin in the eastern South Atlantic Ocean were analyzed for their benthic foraminiferal content and benthic stable carbon isotope composition. The locations of the cores were selected such that both are presently bathed by North Atlantic Deep Water (NADW) and past changes in deep water circulation should be recorded simultaneously at both locations. However, the areas differ in terms of primary production. One core was recovered from the nutrient-depleted Walvis Ridge area, whereas the other is from the continental slope just below the coastal upwelling mixing area, where present-day organic matter fluxes are shown to be moderately high. Recent data served as the basis for the interpretation of the late Quaternary faunal fluctuations and the paleoceanographic reconstruction. During the last 450,000 years, NADW flux into the eastern South Atlantic Ocean has been restricted to interglacial periods, with the strongest dominance of a NADW-driven deep water circulation during interglacial stages 1, 9 and 11. At the continental margin, high-productivity faunas and very low epibenthic d13C values indicate enhanced fluxes of organic matter during glacial periods. This can be attributed to a glacial increase and lateral extension of coastal upwelling. The long-term glacial-interglacial paleoproductivity cycles are superimposed by high-frequency variations with a period of about 23,000 yr. Enhanced productivity in surface waters above the Walvis Ridge, far from the coast, is indicated during glacial stages 8, 10 and 12. During these periods, cold, nutrient-rich filaments from the mixing area were probably driven as far as the southeastern flank of the Walvis Ridge.
Abstract:
We analyse ice cores from Vestfonna ice cap (Nordaustlandet, Svalbard). Oxygen isotopic measurements were made on three firn cores (6.0, 11.0 and 15.5 m deep) from the two highest summits of the glacier, located on the SW-NE and NW-SE central ridges. Sub-annual d18O cycles were preserved and could be counted visually in the uppermost parts of the cores, but deeper layers were affected by post-depositional smoothing. A pronounced d18O minimum was found near the bottom of the three cores. We consider this d18O signal to be a valuable reference horizon, since it is also seen elsewhere in Nordaustlandet. We attribute it to isotopically depleted snow precipitation, which NCEP/NCAR reanalysis shows was unusual for Vestfonna and came with northerly air during the cold winter of 1994/95. Finding the 1994/95 time marker allows the establishment of a precise depth/age scale for the three cores. The derived annual accumulation rates indirectly fill a geographical gap in mass balance measurements and thus provide information on the spatial and temporal variability of precipitation over the glacier for the period spanned by the cores (1992-2009). Comparing records at the two locations also reveals that the net snow accumulation at the easternmost part of Vestfonna was only half of that in the western part over the last 17 years.
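Once annual layer boundaries have been counted against a time marker such as the 1994/95 horizon, annual accumulation rates follow from the depth differences between successive boundaries. The sketch below assumes a constant firn density and invented boundary depths for illustration; real firn densifies with depth, so the paper's derivation is necessarily more careful.

```python
def annual_accumulation(horizon_depths_m, density_kg_m3=400.0):
    """Water-equivalent accumulation between successive annual horizons.

    horizon_depths_m: depths of counted annual layer boundaries,
    ordered shallow to deep. density_kg_m3: assumed constant firn
    density (a simplification). Returns metres water equivalent per year.
    """
    w = density_kg_m3 / 1000.0  # fraction of the density of water
    return [(deep - shallow) * w
            for shallow, deep in zip(horizon_depths_m, horizon_depths_m[1:])]

# Hypothetical boundary depths counted down from the surface (m).
rates = annual_accumulation([0.0, 0.9, 1.7, 2.6])
```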
Abstract:
A reconstruction of Holocene sea ice conditions in the Fram Strait provides insight into the palaeoenvironmental and palaeoceanographic development of this climate-sensitive area during the past 8,500 years BP. Organic geochemical analyses of sediment cores from eastern and western Fram Strait enable the identification of variations in ice coverage that can be linked to changes in the oceanic (and atmospheric) circulation system. By means of the sea ice proxy IP25, phytoplankton-derived biomarkers and ice-rafted detritus (IRD), increasing sea ice occurrences are traced along the western continental margin of Spitsbergen throughout the Holocene, which supports previous palaeoenvironmental reconstructions that document a general cooling. A further significant ice advance during the Neoglacial is accompanied by distinct sea ice fluctuations, which point to short-term perturbations in either the Atlantic Water advection or the Arctic Water outflow at this site. At the continental shelf of East Greenland, however, the general Holocene cooling seems to be less pronounced, and sea ice conditions remained rather stable. Here, a major Neoglacial increase in sea ice coverage did not occur before 1,000 years BP. Phytoplankton-IP25 indices ("PIP25-Index") are used for more explicit sea ice estimates and display a Mid-Holocene shift from minor sea ice coverage to stable ice margin conditions in eastern Fram Strait, while the inner East Greenland shelf experienced less severe to marginal sea ice occurrences throughout the entire Holocene.
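The PIP25 index mentioned above is commonly computed as IP25 / (IP25 + c × phytoplankton marker), where the balance factor c is the ratio of the mean IP25 and mean phytoplankton-marker concentrations over the record. A minimal sketch, with invented downcore concentrations:

```python
def pip25(ip25, phyto):
    """PIP25 sea-ice index per sample.

    ip25: IP25 concentrations per sample; phyto: concentrations of a
    phytoplankton biomarker (e.g. brassicasterol), same units.
    The balance factor c compensates for the concentration difference
    between the two biomarkers. Values near 0 suggest open water,
    values near 1 extensive sea ice cover.
    """
    c = (sum(ip25) / len(ip25)) / (sum(phyto) / len(phyto))
    return [i / (i + c * p) for i, p in zip(ip25, phyto)]

# Hypothetical downcore record: rising IP25, falling phytoplankton marker.
index = pip25([0.2, 0.5, 0.9], [4.0, 2.0, 0.5])
```

Note that because c is derived from the data set itself, PIP25 values are only directly comparable within a record (or between records normalized the same way).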
Abstract:
We present three new benthic foraminiferal delta13C, delta18O, and total organic carbon time series from the eastern Atlantic sector of the Southern Ocean between 41°S and 47°S. The measured glacial delta13C values are among the lowest hitherto reported. We demonstrate a coincidence between depleted late Holocene (LH) delta13C values and the positions of sites relative to ocean surface productivity. A correction of +0.3 to +0.4 per mil (VPDB) for a productivity-induced depletion of Last Glacial Maximum (LGM) benthic delta13C values of these cores is suggested. The new data are compiled with published data from 13 sediment cores from the eastern Atlantic Ocean between 19°S and 47°S, and the regional deep and bottom water circulation is reconstructed for LH (4-0 ka) and LGM (22-16 ka) times. This extends earlier eastern Atlantic-wide synoptic reconstructions, which suffered from the lack of data south of 20°S. A conceptual model of LGM deep-water circulation is discussed that, after correction of the southernmost cores below the Antarctic Circumpolar Current (ACC) for a productivity-induced artifact, suggests a reduced formation of both North Atlantic Deep Water in the northern Atlantic and bottom water in the southwestern Weddell Sea. This reduction was compensated for by the formation of deep water in the zone of extended winter sea-ice coverage at the northern rim of the Weddell Sea, where air-sea gas exchange was reduced. This shift from LGM deep-water formation in the region south of the ACC to Holocene bottom water formation in the southwestern Weddell Sea can explain preformed d13CDIC values of glacial circumantarctic deep water that are lower by approximately 0.3 to 0.4 per mil. Our reconstruction brings Atlantic and Southern Ocean d13C and Cd/Ca data into better agreement, but it conflicts with a scenario of an essentially unchanged thermohaline deep circulation on a global scale.
Benthic delta18O-derived LGM bottom water temperatures, which are 1.9°C and 0.3°C lower than during the LH at the deepest southern and shallowest northern sites, respectively, agree with the reconstruction of deep-water circulation in the eastern South Atlantic Ocean proposed here.
Abstract:
X-ray fluorescence (XRF) scanning of sediment cores from the Lomonosov Ridge and the Morris Jesup Rise reveals a distinct pattern of Ca intensity peaks through Marine Isotope Stages (MIS) 1 to 7. Downcore of MIS 7, the Ca signal is more irregular and near the detection limit. Virtually all major peaks in Ca coincide with a high abundance of calcareous microfossils; this is particularly conspicuous in the cores from the central Arctic Ocean. However, the recorded Ca signal is generally caused by a combination of biogenic and detrital carbonate, and in areas influenced by input from the Canadian Arctic, detrital carbonates may effectively mask the foraminiferal carbonates. Despite this, there is a strong correlation between XRF-detected Ca content and foraminiferal abundance. We propose that in the Arctic Ocean north of Greenland a common palaeoceanographic mechanism is controlling Ca-rich ice-rafted debris (IRD) and foraminiferal abundance. Previous studies have shown that glacial periods are characterized by foraminifer-barren sediments. This implies that the Ca-rich IRD intervals with abundant foraminifera were most likely deposited during interglacial periods when glaciers left in the Canadian Arctic Archipelago were still active and delivered a large amount of icebergs. At the same time, conditions were favourable for planktic foraminifera, resulting in a strong covariance between these proxies. Therefore, we suggest that the XRF scanner's capability to efficiently map Ca concentrations in sediment cores makes it possible to systematically examine large numbers of cores from different regions to investigate the palaeoceanographic reasons for the calcareous microfossils' spatial and temporal variability.
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Introduction: Mutation testing for the MEN1 gene is a useful method to diagnose and predict individuals who either have or will develop multiple endocrine neoplasia type 1 (MEN1). Clinical selection criteria to identify patients who should be tested are needed, as mutation analysis is costly and time consuming. This study is a report of an Australian national mutation testing service for the MEN1 gene from referred patients with classical MEN1 and various MEN1-like conditions. Results: All 55 MEN1 mutation-positive patients had a family history of hyperparathyroidism, had hyperparathyroidism with one other MEN1-related tumour, or had hyperparathyroidism with multiglandular hyperplasia at a young age. We found 42 separate mutations and six recurring mutations from unrelated families, and evidence for a founder effect in five families with the same mutation. Discussion: Our results indicate that mutations in genes other than MEN1 may cause familial isolated hyperparathyroidism and familial isolated pituitary tumours. Conclusions: We therefore suggest that routine germline MEN1 mutation testing of all cases of "classical" MEN1, familial hyperparathyroidism, and sporadic hyperparathyroidism with one other MEN1-related condition is justified by national testing services. We do not recommend routine sequencing of the promoter region between nucleotides 1234 and 1758 (GenBank accession no. U93237), as we could not detect any sequence variations within this region in any familial or sporadic cases of MEN1-related conditions lacking a MEN1 mutation. We also suggest that testing be considered for patients < 30 years old with sporadic hyperparathyroidism and multigland hyperplasia.
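The selection criteria named in this abstract map naturally onto a simple decision rule. The sketch below is a purely illustrative encoding of those criteria as stated, not clinical guidance; the parameter names and the age threshold of 30 are taken directly from the abstract, and real referral decisions involve far more context.

```python
def recommend_men1_testing(family_history_hpt, hpt, other_men1_tumour,
                           multigland_hyperplasia, age):
    """Illustrative encoding of the abstract's selection criteria.

    family_history_hpt: family history of hyperparathyroidism (HPT)
    hpt: patient has hyperparathyroidism
    other_men1_tumour: one other MEN1-related tumour present
    multigland_hyperplasia: multiglandular hyperplasia present
    Returns True when germline MEN1 testing would be suggested.
    """
    if family_history_hpt:
        return True                       # familial HPT
    if hpt and other_men1_tumour:
        return True                       # HPT + one other MEN1-related tumour
    if hpt and multigland_hyperplasia and age < 30:
        return True                       # young sporadic HPT, multigland disease
    return False
```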
Abstract:
This article demonstrates the use of embedded fibre Bragg gratings as a vector bending sensor to monitor the two-dimensional shape deformation of a shape memory polymer plate. The shape memory polymer plate was made using thermal-responsive epoxy-based shape memory polymer materials, and the two fibre Bragg grating sensors were orthogonally embedded, one on the top and the other on the bottom layer of the plate, in order to measure the strain distribution in the longitudinal and transverse directions separately and also with a temperature reference. When the shape memory polymer plate was bent at different angles, the Bragg wavelengths of the embedded fibre Bragg gratings showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution for the whole shape recovery process. The strain transfer rate between the shape memory polymer and the optical fibre was also calculated from the finite element method and determined by experimental results, and was around 0.25. During the experiment, the embedded fibre Bragg gratings showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer, around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. Therefore, the orthogonal arrangement of the two fibre Bragg grating sensors could provide a temperature compensation function, as one of the fibre Bragg gratings only measures the temperature while the other is subjected to the directional deformation. © The Author(s) 2013.
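The temperature-compensation scheme described above amounts to solving a small linear system: one grating sees bend plus temperature, the other temperature only. Using the sensitivities quoted in the abstract (50 pm/° and 108.24 pm/°C below Tg), and assuming both gratings share the same temperature sensitivity (a simplification), the two effects can be separated as follows:

```python
K_BEND = 50.0    # pm per degree of bend (from the abstract)
K_TEMP = 108.24  # pm per deg C, below Tg (from the abstract)

def decouple(dl_bend_grating_pm, dl_temp_grating_pm):
    """Separate bend angle and temperature change from two FBG shifts.

    dl_temp_grating_pm: wavelength shift of the grating exposed to
    temperature only. dl_bend_grating_pm: shift of the grating seeing
    bend + temperature. Assumes identical temperature sensitivity for
    both gratings (illustrative simplification).
    Returns (bend_angle_deg, delta_T_degC).
    """
    delta_t = dl_temp_grating_pm / K_TEMP
    bend_deg = (dl_bend_grating_pm - dl_temp_grating_pm) / K_BEND
    return bend_deg, delta_t

# Synthetic example: a 10-degree bend combined with 2 deg C of warming.
bend, dT = decouple(50.0 * 10 + 108.24 * 2, 108.24 * 2)
```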
Abstract:
Красимир Манев, Антон Желязков, Станимир Бойчев - This article presents the implementation of the last phase of an automatic test-data generator for structural testing of software written in an object-oriented programming language: the generation of the source code of the testing module. Some implementation details of the other phases that are important for the implementation of the last phase are presented first. The algorithm for generating the code of the testing module is then described.
Abstract:
The cores and dredges described in this report were taken during the Vema 16 Expedition, from October 1959 until September 1960, by the Lamont Geological Observatory, Columbia University, from the R/V Vema. An approximate total of 300 cores, dredges and camera stations were recovered and are available at Lamont-Doherty Earth Observatory for sampling and study.
Abstract:
Bioturbation in marine sediments has basically two aspects of interest for palaeo-environmental studies. First, the traces left by burrowing organisms reflect the prevailing environmental conditions at the seafloor and thus can be used to reconstruct the ecologic and palaeoceanographic situation. Traces have the advantage over other proxies of practically always being preserved in situ. Second, for high-resolution stratigraphy, bioturbation is a nuisance due to the stirring and mixing processes that destroy the stratigraphic record. In order to evaluate the applicability of biogenic traces as palaeoenvironmental indicators, a number of gravity cores from the Portuguese continental slope, covering the period from the last glacial to the present, were investigated through X-ray radiographs. In addition, physical and chemical parameters were determined to define the environmental niche in each core interval. A number of traces could be recognized, the most important being: Thalassinoides, Planolites, Zoophycos, Chondrites, Scolicia, Palaeophycus, Phycosiphon and the generally pyritized traces Trichichnus and Mycellia. The shifts between the different ichnofabrics agree strikingly well with the variations in ocean circulation caused by the changing climate. On the upper and middle slope, variations in current intensity and oxygenation of the Mediterranean Outflow Water were responsible for shifts in the ichnofabric. Larger traces such as Planolites and Thalassinoides dominated in coarse, well-oxygenated intervals, while small traces such as Chondrites and Trichichnus dominated in fine-grained, poorly oxygenated intervals. In contrast, on the lower slope, where calm, steady sedimentation conditions prevail, changes in sedimentation rate and nutrient flux have controlled variations in the distribution of larger traces such as Planolites, Thalassinoides, and Palaeophycus.
Additionally, distinct layers of abundant Chondrites correspond to Heinrich events 1, 2, and 4, and are interpreted as a response to incursions of nutrient-rich, oxygen-depleted Antarctic waters during phases of reduced thermohaline circulation. The results clearly show that not one single factor but a combination of several factors is necessary to explain the changes in ichnofabric. Furthermore, large variations in the extent and type of bioturbation and tiering between different settings clearly show that a more detailed knowledge of the factors governing bioturbation is necessary if we are to fully comprehend how proxy records are disturbed. A first attempt to automate part of the recognition and quantification of the ichnofabric was performed using the DIAna image analysis program on digitized X-ray radiographs. The results show that an enhanced abundance of pyritized microburrows appears to be coupled to organic-rich sediments deposited under dysoxic conditions. Coarse-grained sediments inhibit the formation of pyritized burrows. However, the smallest changes in the program settings controlling the grey-scale threshold and the sensitivity resulted in large shifts in the number of detected burrows. Therefore, this method can only be considered semi-quantitative. Through AMS-14C dating of sample pairs from the Zoophycos spreiten and the surrounding host sediment, age reversals of up to 3,320 years could be demonstrated for the first time. The spreiten material is always several thousand years younger than the surrounding host sediment. Together with detailed X-ray radiograph studies, this shows that the trace maker collects the material on the seafloor and then transports it downwards, more than one metre in places, into the underlying sediment, where it is deposited in distinct structures termed spreiten. This clearly shows that age reversals of several thousand years can be expected whenever Zoophycos is unknowingly sampled.
These results also render the hitherto proposed ethological models for Zoophycos largely implausible. Therefore, a combination of detritus feeding, short-term caching, and hibernation, possibly combined with gardening, is suggested here as an explanation for this complicated burrow.