929 results for Source analysis
Abstract:
Existing instrumental techniques must be adaptable to the analysis of novel explosives if science is to keep up with the practices of terrorists and criminals. The focus of this work has been the development of analytical techniques for the analysis of two types of novel explosives: ascorbic acid-based propellants, and improvised mixtures of concentrated hydrogen peroxide/fuel. In recent years, the use of these explosives in improvised explosive devices (IEDs) has increased. It is therefore important to develop methods which permit the identification of the nature of the original explosive from post-blast residues. Ascorbic acid-based propellants are low explosives which employ an ascorbic acid fuel source with a nitrate/perchlorate oxidizer. A method which utilized ion chromatography with indirect photometric detection was optimized for the analysis of intact propellants. Post-burn and post-blast residues of these propellants were also analyzed. It was determined that the ascorbic acid fuel and nitrate oxidizer could be detected in intact propellants, as well as in the post-burn and post-blast residues. Degradation products of the nitrate and perchlorate oxidizers were also detected. With a quadrupole time-of-flight mass spectrometer (QToFMS), exact mass measurements are possible. When an HPLC instrument is coupled to a QToFMS, the combination of retention time with accurate mass measurements, mass spectral fragmentation information, and isotopic abundance patterns allows for the unequivocal identification of a target analyte. An optimized HPLC-ESI-QToFMS method was applied to the analysis of ascorbic acid-based propellants. Exact mass measurements were collected for the fuel and oxidizer anions and their degradation products. Ascorbic acid was detected in the intact samples and in half of the propellants subjected to open burning; the intact fuel molecule was not detected in any of the post-blast residues.
Two methods were optimized for the analysis of trace levels of hydrogen peroxide: HPLC with fluorescence detection (HPLC-FD), and HPLC with electrochemical detection (HPLC-ED). Both techniques were extremely selective for hydrogen peroxide. Both methods were applied to the analysis of post-blast debris from improvised mixtures of concentrated hydrogen peroxide/fuel; hydrogen peroxide was detected on a variety of substrates. Hydrogen peroxide was also detected in the post-blast residues of the improvised explosives TATP and HMTD.
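For trace-level methods like these, selectivity is typically paired with a linear calibration curve and a limit of detection. A minimal sketch of that routine in the ICH style (LOD = 3.3 σ/slope); the concentrations and peak areas are invented for illustration, not data from this work:

```python
# Least-squares calibration of a chromatographic detector response,
# with a limit of detection (LOD) estimated as 3.3*sigma/slope.
# All numbers below are hypothetical.

def linear_fit(xs, ys):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod(xs, ys):
    """LOD = 3.3 * (standard error of the residuals) / slope."""
    slope, intercept = linear_fit(xs, ys)
    residuals = [y - (slope * x + intercept) for x, y in zip(xs, ys)]
    sigma = (sum(r * r for r in residuals) / (len(xs) - 2)) ** 0.5
    return 3.3 * sigma / slope

# Hypothetical H2O2 standards (uM) vs. detector peak areas
conc = [0.5, 1.0, 2.0, 5.0, 10.0]
area = [12.1, 24.5, 48.7, 122.0, 243.9]
slope, intercept = linear_fit(conc, area)
detection_limit = lod(conc, area)
```

With a nearly linear response like this toy series, the fitted slope sits near 24 area units per µM and the estimated LOD falls well below the lowest standard.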
Abstract:
The primary purpose of this thesis was to present a theoretical large-signal analysis to study the power gain and efficiency of a microwave power amplifier for LS-band communications using software simulation. Power gain, efficiency, reliability, and stability are important characteristics in the power amplifier design process. These characteristics affect advanced wireless systems, which require low-cost device amplification without sacrificing system performance. Large-signal modeling and input and output matching components are used in this thesis. Motorola's Electro Thermal LDMOS model is a new transistor model that includes self-heating effects and is capable of both small- and large-signal simulations. It allows most of the design effort to be focused on stability, power gain, bandwidth, and DC requirements. The matching technique allows the gain to be maximized at a specific target frequency. Calculations and simulations for the microwave power amplifier design were performed using Matlab and the Microwave Office simulation software, respectively. The study demonstrated that Motorola's Electro Thermal LDMOS transistor is a viable solution for common-source amplifier applications in high-power base stations. The MET-LDMOS met the stability requirements for the specified frequency range without a stability-improvement model. The power gain of the amplifier circuit was improved through proper input/output matching network design. The gain and efficiency of the amplifier improved by approximately 4 dB and 7.27%, respectively. The gain value is roughly 0.89 dB higher than the maximum gain specified in the MRF21010 data sheet. This work can lead to efficient modeling and development of high-power LDMOS transistor implementations in commercial and industrial applications.
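The stability check described above is conventionally done with the Rollett stability factor K (unconditional stability when K > 1 and |Δ| < 1), after which a maximum available gain can be quoted. A sketch under illustrative S-parameters (these are made-up values, not MRF21010 data):

```python
import cmath
import math

def rollett_k(s11, s12, s21, s22):
    """Rollett stability factor K for a two-port; K > 1 together with
    |Delta| < 1 indicates unconditional stability."""
    delta = s11 * s22 - s12 * s21
    k = (1 - abs(s11) ** 2 - abs(s22) ** 2 + abs(delta) ** 2) / (2 * abs(s12 * s21))
    return k, abs(delta)

def max_available_gain_db(s12, s21, k):
    """Maximum available gain in dB (valid only when K >= 1)."""
    mag = abs(s21 / s12) * (k - math.sqrt(k * k - 1))
    return 10 * math.log10(mag)

# Illustrative S-parameters in polar form (magnitude, angle in degrees)
s11 = cmath.rect(0.6, math.radians(160))
s12 = cmath.rect(0.05, math.radians(30))
s21 = cmath.rect(4.0, math.radians(80))
s22 = cmath.rect(0.5, math.radians(-30))

k, mag_delta = rollett_k(s11, s12, s21, s22)
gain_db = max_available_gain_db(s12, s21, k)
```

For this toy device K comes out just above 1 with |Δ| < 1, so conjugate matching at the target frequency is permissible and the maximum available gain is well defined.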
Abstract:
The maintenance and evolution of software systems has become a highly critical task over the last years due to the diversity and high demand of features, devices, and users. Understanding and analyzing how new changes impact the quality attributes of the architecture of such systems is an essential prerequisite for avoiding the deterioration of their quality during their evolution. This thesis proposes an automated approach for the analysis of variation of the performance quality attribute in terms of execution time (response time). It is implemented by a framework that adopts dynamic analysis and mining software repositories techniques to provide an automated way of revealing potential sources (commits and issues) of performance variation in scenarios during the evolution of software systems. The approach defines four phases: (i) preparation, choosing the scenarios and preparing the target releases; (ii) dynamic analysis, determining the performance of scenarios and methods by computing their execution times; (iii) variation analysis, processing and comparing the dynamic analysis results for different releases; and (iv) repository mining, identifying issues and commits associated with the detected performance variation. Empirical studies were performed to evaluate the approach from different perspectives. An exploratory study analyzed the feasibility of applying the approach to systems from different domains to automatically identify source code elements with performance variation and the changes that affected those elements during an evolution. This study analyzed three systems: (i) SIGAA, a web system for academic management; (ii) ArgoUML, a UML modeling tool; and (iii) Netty, a framework for network applications. Another study performed an evolutionary analysis by applying the approach to multiple releases of Netty and of the web frameworks Wicket and Jetty.
In this study, 21 releases (seven from each system) were analyzed, totaling 57 scenarios. In summary, 14 scenarios with significant performance variation were found for Netty, 13 for Wicket, and 9 for Jetty. Additionally, feedback was obtained from eight developers of these systems through an online form. Finally, in the last study, a regression model for performance was developed to indicate the properties of commits that are most likely to cause performance degradation. Overall, 997 commits were mined, of which 103 were retrieved from degraded source code elements and 19 from optimized ones, while 875 had no impact on execution time. The number of days before the release and the day of the week turned out to be the most relevant variables of performance-degrading commits in our model. The area under the ROC (Receiver Operating Characteristic) curve of the regression model is 60%, which means that using the model to decide whether a commit will cause degradation is 10% better than a random decision.
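The model quality figure quoted in this abstract is the area under the ROC curve (60%, i.e. 10 points better than the 50% expected from random guessing). That metric can be computed directly with the rank-sum formulation; the commit labels and scores below are entirely hypothetical, not the thesis data:

```python
def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a randomly chosen positive scores higher than a randomly chosen
    negative, counting ties as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: 1 = commit degraded performance, 0 = no impact;
# scores are hypothetical model outputs
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.6, 0.3, 0.8, 0.4, 0.35, 0.2, 0.1]
auc = roc_auc(labels, scores)
```

An AUC of 0.5 corresponds to a coin flip, which is why the abstract reads its 0.60 as "10% better than a random decision".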
Abstract:
Thermal analysis of electronic devices is one of the most important steps in the design of modern devices. Precise thermal analysis is essential for designing an effective thermal management system for modern electronic devices such as batteries, LEDs, microelectronics, ICs, circuit boards, semiconductors, and heat spreaders. For a precise thermal analysis, the temperature profile and thermal spreading resistance of the device should be calculated by considering the geometry, properties, and boundary conditions. Thermal spreading resistance occurs when heat enters through a portion of a surface and flows by conduction. It is the primary source of thermal resistance when heat flows from a tiny heat source to a thin and wide heat spreader. In this thesis, analytical models for the temperature behavior and thermal resistance of some common geometries of microelectronic devices, such as heat channels and heat tubes, are investigated. Different boundary conditions for the system are considered. Along the source plane, combinations of discretely specified heat flux, specified temperatures, and adiabatic conditions are studied. Along the walls of the system, adiabatic or convective cooling boundary conditions are assumed. Along the sink plane, convective cooling with a constant or variable heat transfer coefficient is considered. Also, the effect of orthotropic properties is discussed. This thesis contains nine chapters. Chapter one is the introduction and presents the concept of thermal spreading resistance as well as the originality and importance of the work. Chapter two reviews the literature on thermal spreading resistance over the past fifty years with a focus on recent advances. In chapters three and four, the thermal resistance of a two-dimensional flux channel with a non-uniform convection coefficient in the heat sink plane is studied.
The non-uniform convection is modeled by using two functions that can simulate a wide variety of different heat sink configurations. In chapter five, a non-symmetrical flux channel with different heat transfer coefficients along the right and left edges and the sink plane is analytically modeled. Due to the edge cooling and non-symmetry, the eigenvalues of the system are defined using the heat transfer coefficients on both edges, and a normalized function is calculated to satisfy the orthogonality condition. In chapter six, the thermal behavior of a two-dimensional rectangular flux channel with arbitrary boundary conditions on the source plane is presented. The boundary condition along the source plane can be a combination of the first kind (Dirichlet, or prescribed temperature) and the second kind (Neumann, or prescribed heat flux). The proposed solution can be used for modeling flux channels with numerous different source plane boundary conditions without any limitation on the number and position of heat sources. In chapter seven, the temperature profile of a circular flux tube with discretely specified boundary conditions along the source plane is presented. Also, the effect of orthotropic properties is discussed. In chapter eight, a three-dimensional rectangular flux channel with non-uniform heat convection along the heat sink plane is analytically modeled. In chapter nine, a summary of the achievements is presented and some systems are proposed for future studies. It is worth mentioning that all the models and case studies in the thesis are compared with the Finite Element Method (FEM).
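The flux-channel analyses summarized above generalize a classic baseline case: an isoflux strip on a two-dimensional channel with adiabatic sides and an isothermal sink plane, whose mean-temperature spreading resistance is a single eigenfunction series with eigenvalues λn = nπ/b. A sketch of that textbook case (the geometry and conductivity below are illustrative; the thesis chapters treat far more general boundary conditions):

```python
import math

def spreading_resistance_2d(a, b, t, k, n_terms=200):
    """Spreading resistance (per unit depth) of a half-symmetric 2D flux
    channel: isoflux strip of half-width a on a channel of half-width b
    and thickness t, adiabatic sides, isothermal sink plane.
    R_s = 2*b^2/(pi^3*k*a^2) * sum_n sin^2(n*pi*a/b)*tanh(n*pi*t/b)/n^3
    """
    total = 0.0
    for n in range(1, n_terms + 1):
        lam = n * math.pi / b          # eigenvalue lambda_n
        total += math.sin(lam * a) ** 2 * math.tanh(lam * t) / n ** 3
    return 2.0 * b * b * total / (math.pi ** 3 * k * a * a)

# Illustrative geometry: 1 mm wide source centered on a 10 mm wide,
# 2 mm thick copper spreader (half-widths a = 0.5 mm, b = 5 mm)
R = spreading_resistance_2d(a=0.5e-3, b=5e-3, t=2e-3, k=400.0)
```

The 1/n³ decay of the terms makes the series converge quickly, so a few hundred terms suffice for engineering accuracy; the thesis's non-uniform-convection and edge-cooled cases modify the eigenvalues and weighting but keep this series structure.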
Abstract:
The known moss flora of Terra Nova National Park, eastern Newfoundland, comprises 210 species. Eighty-two percent of the moss species occurring in Terra Nova are widespread or widespread-sporadic in Newfoundland. Other Newfoundland distributional elements present in the Terra Nova moss flora are the northwestern, southern, southeastern, and disjunct elements, but four of the mosses occurring in Terra Nova appear to belong to a previously unrecognized northeastern element of the Newfoundland flora. The majority (70.9%) of Terra Nova's mosses are of boreal affinity and are widely distributed in the North American coniferous forest belt. An additional 10.5 percent of the Terra Nova mosses are cosmopolitan while 9.5 percent are temperate and 4.8 percent are arctic-montane species. The remaining 4.3 percent of the mosses are of montane affinity, and disjunct between eastern and western North America. In Terra Nova, temperate species at their northern limit are concentrated in balsam fir stands, while arctic-montane species are restricted to exposed cliffs, scree slopes, and coastal exposures. Montane species are largely confined to exposed or freshwater habitats. Inability to tolerate high summer temperatures limits the distributions of both arctic-montane and montane species. In Terra Nova, species of differing phytogeographic affinities co-occur on cliffs and scree slopes. The microhabitat relationships of five selected species from such habitats were evaluated by Discriminant Functions Analysis and Multiple Regression Analysis. The five mosses have distinct and different microhabitats on cliffs and scree slopes in Terra Nova, and abundance of all but one is associated with variation in at least one microhabitat variable. Micro-distribution of Grimmia torquata, an arctic-montane species at its southern limit, appears to be determined by sensitivity to high summer temperatures.
Both southern mosses at their northern limit (Aulacomnium androgynum, Isothecium myosuroides) appear to be limited by water availability and, possibly, by low winter temperatures. The two species whose distributions extend both north and south of the study area (Encalypta procera, Eurhynchium pulchellum) show no clear relationship with microclimate. Dispersal factors have played a significant role in the development of the Terra Nova moss flora. Compared to the most likely colonizing source (i.e., the rest of the island of Newfoundland), species with small diaspores have colonized the study area to a proportionately much greater extent than have species with large diaspores. Hierarchical log-linear analysis indicates that this is so for all affinity groups present in Terra Nova. The apparent dispersal effects emphasize the comparatively recent glaciation of the area, and may also have been enhanced by anthropogenic influences. The restriction of some species to specific habitats, or to narrowly defined microhabitats, appears to strengthen selection for easily dispersed taxa.
Abstract:
Detailed knowledge of the extent of post-genetic modifications affecting shallow submarine hydrocarbons fueled from the deep subsurface is fundamental for evaluating source and reservoir properties. We investigated gases from a submarine high-flux seepage site in the anoxic Eastern Black Sea in order to elucidate molecular and isotopic alterations of low-molecular-weight hydrocarbons (LMWHC) associated with upward migration through the sediment and precipitation of shallow gas hydrates. For this, near-surface sediment pressure cores and free gas venting from the seafloor were collected using autoclave technology at the Batumi seep area at 845 m water depth within the gas hydrate stability zone. Vent gas, gas from pressure core degassing, and from hydrate dissociation were strongly dominated by methane (>99.85 mol.% of Sum[C1-C4, CO2]). Molecular ratios of LMWHC (C1/[C2 + C3] > 1000) and stable isotopic compositions of methane (d13C = -53.5 per mil V-PDB; D/H around -175 per mil SMOW) indicated predominant microbial methane formation. C1/C2+ ratios and stable isotopic compositions of LMWHC distinguished three gas types prevailing in the seepage area. Vent gas discharged into bottom waters was depleted in methane by >0.03 mol.% (Sum[C1-C4, CO2]) relative to the other gas types, and the virtual lack of 14C-CH4 indicated a negligible input of methane from degradation of fresh organic matter. Of all gas types analyzed, vent gas was least affected by molecular fractionation; thus, its origin from the deep subsurface rather than from decomposing hydrates in near-surface sediments is likely. As a result of the anaerobic oxidation of methane, LMWHC in pressure cores in top sediments included smaller methane fractions [0.03 mol.% Sum(C1-C4, CO2)] than gas released from pressure cores of more deeply buried sediments, where the fraction of methane was maximal due to its preferential incorporation in hydrate lattices.
No indications of stable carbon isotopic fractionation of methane during hydrate crystallization from vent gas were found. Enrichments of 14C-CH4 (1.4 pMC) in short cores relative to lower abundances (max. 0.6 pMC) in gas from long cores and gas hydrates substantiate recent methanogenesis utilizing modern organic matter deposited in the top sediments of this high-flux hydrocarbon seep area.
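The molecular and isotopic screening used in this study (C1/[C2 + C3] ratios combined with d13C-CH4) is commonly summarized as a Bernard-diagram-style classification. A minimal sketch with approximate literature thresholds, not values derived from this study:

```python
def classify_gas(c1_over_c2c3, d13c_ch4):
    """Simplified Bernard-diagram screen for natural gas origin.
    Microbial gas is very dry (high C1/(C2+C3)) with isotopically light
    methane; thermogenic gas is wetter with heavier methane.
    Thresholds are approximate and for illustration only."""
    if c1_over_c2c3 > 1000 and d13c_ch4 < -50:
        return "primarily microbial"
    if c1_over_c2c3 < 100 and d13c_ch4 > -50:
        return "primarily thermogenic"
    return "mixed/indeterminate"

# Batumi seep vent gas: C1/(C2+C3) > 1000, d13C-CH4 = -53.5 per mil
origin = classify_gas(1500, -53.5)
```

With the abstract's reported ratio and isotopic composition, this simple screen agrees with the interpretation of predominantly microbial methane.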
Abstract:
Vodyanitskii mud volcano is located at a depth of about 2070 m in the Sorokin Trough, Black Sea. It is a 500-m wide and 20-m high cone surrounded by a depression, which is typical of many mud volcanoes in the Black Sea. 75 kHz sidescan sonar data show different generations of mud flows that include mud breccia, authigenic carbonates, and gas hydrates that were sampled by gravity coring. The fluids that flow through or erupt with the mud are enriched in chloride (up to 650 mmol L**-1 at 150-cm sediment depth), suggesting a deep source, which is similar to the fluids of the close-by Dvurechenskii mud volcano. Direct observation with the remotely operated vehicle Quest revealed gas bubbles emanating at two distinct sites at the crest of the mud volcano, which confirms earlier observations of bubble-induced hydroacoustic anomalies in echosounder records. The sediments at the main bubble emission site show a thermal anomaly with temperatures at 60 cm sediment depth that were 0.9 °C warmer than the bottom water. Chemical and isotopic analyses of the emanated gas revealed that it consisted primarily of methane (99.8%) and was of microbial origin (dD-CH4 = -170.8 per mil (SMOW), d13C-CH4 = -61.0 per mil (V-PDB), d13C-C2H6 = -44.0 per mil (V-PDB)). The gas flux was estimated using the video observations of the ROV. Assuming that the flux is constant with time, about 0.9 ± 0.5 x 10**6 mol of methane is released every year. This value is of the same order of magnitude as reported fluxes of dissolved methane released with pore water at other mud volcanoes. This suggests that bubble emanation is a significant pathway transporting methane from the sediments into the water column.
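The order of magnitude of such a bubble-flux estimate can be reproduced with the ideal gas law evaluated at in-situ pressure. The bubble radius and release rate below are illustrative assumptions chosen to land in the reported range, not the actual ROV measurements:

```python
import math

# Back-of-envelope methane flux from bubble observations: moles per
# bubble via the ideal gas law at in-situ pressure, scaled by an
# assumed bubble release rate.

RHO_SEAWATER = 1030.0   # kg/m^3
G = 9.81                # m/s^2
R_GAS = 8.314           # J/(mol K)

def annual_methane_mol(depth_m, temp_k, bubble_radius_m, bubbles_per_s):
    pressure = 101325.0 + RHO_SEAWATER * G * depth_m        # Pa, hydrostatic
    volume = 4.0 / 3.0 * math.pi * bubble_radius_m ** 3     # m^3 per bubble
    mol_per_bubble = pressure * volume / (R_GAS * temp_k)   # ideal gas law
    return mol_per_bubble * bubbles_per_s * 365.25 * 86400.0

# ~2070 m depth, ~9 degC bottom water, 2.5 mm bubbles at 50 per second
# (bubble size and rate are assumptions for illustration)
flux = annual_methane_mol(2070.0, 282.0, 2.5e-3, 50.0)
```

With these assumed inputs the estimate lands near 10**6 mol per year, the order of magnitude reported above; the dominant sensitivity is to bubble radius, which enters cubed.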
Abstract:
Multivariate statistical analysis of the kaolinite/chlorite ratios from 20 South Atlantic sediment cores allowed for the extraction of two processes controlling the fluctuations of the kaolinite/chlorite ratio during the last 130,000 yrs: (1) the relative strength of North Atlantic Deep Water (NADW) inflow into the South Atlantic Ocean and (2) the influx of aeolian sediments from the south African continent. The NADW fluctuation can be traced in the entire deep South Atlantic while the dust signal is restricted to the vicinity of South Africa. Our data indicate that NADW formation underwent significant changes in response to glacial/interglacial climate changes, with enhanced export to the Southern Hemisphere during interglacials. The most pronounced phases with Enhanced South African Dust Export (ESADE) occurred during cold Marine Isotope Stage (MIS) 5d and across the Late Glacial/Holocene transition from 16 ka to 4 ka (MIS 2 to 1). This particular pattern is attributed to the interaction of Antarctic sea ice extent, the position of the westerlies, and the South African monsoon system.
Abstract:
Software bug analysis is one of the most important activities in software quality. The rapid and correct implementation of the necessary repair influences both developers, who must deliver fully functioning software, and users, who need to perform their daily tasks. In this context, an incorrect classification of bugs may lead to unwanted situations. One of the main attributes assigned to a bug at the time of its initial report is severity, which reflects the urgency of correcting that problem. In this scenario, we identified in datasets extracted from five open source systems (Apache, Eclipse, Kernel, Mozilla and Open Office) an irregular distribution of bugs with respect to the existing severities, which is an early sign of misclassification. In the datasets analyzed, about 85% of bugs are ranked with normal severity. This classification rate can have a negative influence on the software development context, where a misclassified bug may be allocated to a developer with little experience to solve it; its correction may thus take longer, or even result in an incorrect implementation. Several studies in the literature have disregarded normal bugs, working only with the portion of bugs initially considered severe or not severe. This work aimed to investigate this portion of the data, with the purpose of identifying whether the normal severity reflects the real impact and urgency, investigating whether there are bugs (initially classified as normal) that could be classified with another severity, and assessing whether there are impacts for developers in this context. For this, an automatic classifier was developed, based on three algorithms (Naive Bayes, MaxEnt and Winnow), to assess whether the normal severity is correct for the bugs initially categorized with this severity.
The algorithms achieved an accuracy of about 80% and showed that between 21% and 36% of the bugs should have been classified differently (depending on the algorithm), which represents somewhere between 70,000 and 130,000 bugs in the dataset.
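One of the three algorithms named above, Naive Bayes, can be sketched in a few lines as a text classifier over bug report summaries. The summaries and labels below are hypothetical, not drawn from the studied datasets:

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial Naive Bayes with Laplace smoothing, in the
    spirit of the severity classifier described above."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def predict(self, doc):
        best, best_lp = None, float("-inf")
        n_docs = sum(self.class_counts.values())
        for c in self.classes:
            lp = math.log(self.class_counts[c] / n_docs)  # log prior
            total = sum(self.word_counts[c].values())
            for w in doc.lower().split():
                # Laplace-smoothed log likelihood of each word
                lp += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Hypothetical bug summaries and severities for illustration
docs = ["crash on startup data loss", "typo in tooltip label",
        "kernel panic data corruption", "minor cosmetic misalignment"]
labels = ["severe", "normal", "severe", "normal"]
clf = NaiveBayes().fit(docs, labels)
```

Despite its independence assumption, this kind of bag-of-words model is a common baseline for severity reassessment studies like the one summarized above.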
Abstract:
The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs "radio-hybrid" measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request. (C) 2011 Elsevier B.V. All rights reserved.
Abstract:
CSK and RH are funded by National Institute for Health Research Academic Clinical Fellowships. This study was supported by a grant from the North Staffs Heart Committee.
Abstract:
Background: Recent morpho-functional evidence pointed out that abnormalities in the thalamus could play a major role in the expression of the neurophysiological and clinical correlates of migraine. Whether this phenomenon is primary or secondary to its functional disconnection from the brain stem remains to be determined. Aim: We used a Functional Source Separation algorithm of the EEG signal to extract the activity of the different neuronal pools recruited at different latencies along the somatosensory pathway in interictal migraine without aura (MO) patients. Method: Twenty MO patients and 20 healthy volunteers (HV) underwent EEG recording. Four ad-hoc functional constraints, two sub-cortical (FS14 at brain stem and FS16 at thalamic level) and two cortical (FS20 radial and FS22 tangential parietal sources), were used to extract the activity of successive stages of somatosensory information processing in response to separate left and right median nerve electric stimulation. A band-pass digital filter (450–750 Hz) was applied offline in order to extract high-frequency oscillatory (HFO) activity from the broadband EEG signal. Results: On both stimulated sides, significantly reduced subcortical brain stem (FS14) and thalamic (FS16) HFO activations characterized MO patients when compared with HV. No difference emerged in the two cortical HFO activations between the two groups. Conclusion: The present results are the first neurophysiological evidence supporting the hypothesis that a functional disconnection of the thalamus from the subcortical monoaminergic system may underlie the interictal cortical abnormal information processing in migraine. Further studies are needed to investigate the precise directional connectivity across the entire primary subcortical and cortical somatosensory pathway in interictal MO.
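The 450–750 Hz band-pass step described above can be illustrated by masking DFT bins outside the pass band. This is a zero-phase, brute-force sketch on a synthetic two-tone signal, not the actual EEG pipeline (which would use a proper digital filter design):

```python
import cmath
import math

def bandpass_dft(signal, fs, f_lo, f_hi):
    """Zero-phase band-pass by zeroing DFT bins outside [f_lo, f_hi].
    Brute-force O(N^2) DFT, fine for short epochs and illustration."""
    n = len(signal)
    spectrum = [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    for k in range(n):
        freq = k * fs / n
        freq = min(freq, fs - freq)   # fold negative-frequency bins
        if not (f_lo <= freq <= f_hi):
            spectrum[k] = 0
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Synthetic test: a 100 Hz "broadband" tone plus a small 600 Hz "HFO"
# component, sampled at 5 kHz; the 450-750 Hz pass band should keep
# only the 600 Hz part.
fs = 5000.0
n = 250
x = [math.sin(2 * math.pi * 100 * t / fs) +
     0.2 * math.sin(2 * math.pi * 600 * t / fs) for t in range(n)]
hfo = bandpass_dft(x, fs, 450.0, 750.0)
```

Because both tones fall exactly on DFT bins here, the filtered output is the 600 Hz component alone, which mimics how the HFO burst is isolated from the broadband somatosensory evoked response.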
Abstract:
Background: Statin therapy reduces the risk of occlusive vascular events, but uncertainty remains about potential effects on cancer. We sought to provide a detailed assessment of any effects on cancer of lowering LDL cholesterol (LDL-C) with a statin using individual patient records from 175,000 patients in 27 large-scale statin trials. Methods and Findings: Individual records of 134,537 participants in 22 randomised trials of statin versus control (median duration 4.8 years) and 39,612 participants in 5 trials of more intensive versus less intensive statin therapy (median duration 5.1 years) were obtained. Reducing LDL-C with a statin for about 5 years had no effect on newly diagnosed cancer or on death from such cancers in either the trials of statin versus control (cancer incidence: 3755 [1.4% per year [py]] versus 3738 [1.4% py], RR 1.00 [95% CI 0.96-1.05]; cancer mortality: 1365 [0.5% py] versus 1358 [0.5% py], RR 1.00 [95% CI 0.93-1.08]) or in the trials of more versus less statin (cancer incidence: 1466 [1.6% py] vs 1472 [1.6% py], RR 1.00 [95% CI 0.93-1.07]; cancer mortality: 447 [0.5% py] versus 481 [0.5% py], RR 0.93 [95% CI 0.82-1.06]). Moreover, there was no evidence of any effect of reducing LDL-C with statin therapy on cancer incidence or mortality at any of 23 individual categories of sites, with increasing years of treatment, for any individual statin, or in any given subgroup. In particular, among individuals with low baseline LDL-C (<2 mmol/L), there was no evidence that further LDL-C reduction (from about 1.7 to 1.3 mmol/L) increased cancer risk (381 [1.6% py] versus 408 [1.7% py]; RR 0.92 [99% CI 0.76-1.10]). Conclusions: In 27 randomised trials, a median of five years of statin therapy had no effect on the incidence of, or mortality from, any type of cancer (or the aggregate of all cancer).
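The rate ratios and confidence intervals quoted above follow the standard log-scale approximation for event rates, where SE(log RR) ≈ sqrt(1/a + 1/b) for event counts a and b. Assuming equal person-years in the two arms (as the identical 1.4% per year rates suggest), the headline incidence comparison can be reproduced:

```python
import math

def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
    """Rate ratio with a normal-approximation CI on the log scale.
    events_*: event counts; py_*: person-years at risk."""
    rr = (events_a / py_a) / (events_b / py_b)
    se = math.sqrt(1.0 / events_a + 1.0 / events_b)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Cancer incidence, statin vs control arms (3755 vs 3738 events);
# person-years are set equal here as an assumption, since the abstract
# reports 1.4% per year in both arms
rr, lo, hi = rate_ratio_ci(3755, 1.0, 3738, 1.0)
```

With that assumption the calculation returns RR 1.00 with a 95% CI of about 0.96 to 1.05, matching the interval reported in the abstract.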