984 results for Extreme bounds analysis
Abstract:
In technical design processes in the automotive industry, digital prototypes are rapidly gaining importance because they allow design errors to be detected in early development stages. The technical design process includes the computation of swept volumes for maintainability analysis and clearance checks. The swept volume is very useful, for example, to identify problem areas where a safety distance might not be kept. With the explicit construction of the swept volume, an engineer gets evidence on how the shapes of components that come too close have to be modified. In this thesis a concept for the approximation of the outer boundary of a swept volume is developed. For safety reasons, it is essential that the approximation is conservative, i.e., that the swept volume is completely enclosed by the approximation. On the other hand, one wishes to approximate the swept volume as precisely as possible. In this work, we show that the one-sided Hausdorff distance is the adequate error measure for the approximation when the intended usage is clearance checks, continuous collision detection, and maintainability analysis in CAD. We present two implementations that apply the concept and generate a manifold triangle mesh approximating the outer boundary of a swept volume. Both algorithms are two-phased: a sweeping phase, which generates a conservative voxelization of the swept volume, and the actual mesh generation, which is based on restricted Delaunay refinement. This approach ensures a high precision of the approximation while respecting conservativeness. The benchmarks for our tests include, among others, real-world scenarios from the automotive industry. Further, we introduce a method to relate parts of an already computed swept volume boundary to those triangles of the generator that come closest during the sweep. We use this to verify as well as to colorize meshes resulting from our implementations.
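A minimal sketch of the one-sided Hausdorff distance mentioned above, computed between point samples of two surfaces with a k-d tree; the sampled arrays, names, and noise level are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_sided_hausdorff(samples_a: np.ndarray, samples_b: np.ndarray) -> float:
    """d_H(A -> B): the largest distance from any sample of A to its
    nearest sample of B. Unlike the symmetric Hausdorff distance, this
    does not penalize parts of B that lie far from A, which is what a
    conservative (enclosing) approximation calls for."""
    tree = cKDTree(samples_b)
    nearest_dists, _ = tree.query(samples_a)  # distance to closest point of B
    return float(nearest_dists.max())

# Hypothetical usage: dense point samples of the exact swept-volume
# boundary (A) and of its conservative approximation (B).
rng = np.random.default_rng(0)
exact = rng.normal(size=(10_000, 3))
approx = exact + 0.01 * rng.normal(size=(10_000, 3))
print(one_sided_hausdorff(exact, approx))
```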
Abstract:
This thesis aims at investigating a new approach to document analysis based on the idea of structural patterns in XML vocabularies. My work is founded on the belief that authors do naturally converge to a reasonable use of markup languages and that extreme, yet valid instances are rare and limited. Actual documents, therefore, may be used to derive classes of elements (patterns) persisting across documents and distilling the conceptualization of the documents and their components, and may give ground for automatic tools and services that rely on no background information (such as schemas) at all. The central part of my work consists in introducing from the ground up a formal theory of eight structural patterns (with three sub-patterns) that are able to express the logical organization of any XML document, and verifying their identifiability in a number of different vocabularies. This model is characterized by and validated against three main dimensions: terseness (i.e. the ability to represent the structure of a document with a small number of objects and composition rules), coverage (i.e. the ability to capture any possible situation in any document) and expressiveness (i.e. the ability to make explicit the semantics of structures, relations and dependencies). An algorithm for the automatic recognition of structural patterns is then presented, together with an evaluation of the results of a test performed on a set of more than 1100 documents from eight very different vocabularies. This language-independent analysis confirms the ability of patterns to capture and summarize the guidelines used by the authors in their everyday practice. Finally, I present some systems that work directly on the pattern-based representation of documents. The ability of these tools to cover very different situations and contexts confirms the effectiveness of the model.
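The eight patterns themselves are not enumerated in this abstract, so as a stand-in, here is a minimal sketch of the kind of schema-free, instance-based classification such a recognizer could start from; the four coarse content classes and the function name are illustrative assumptions, not the thesis's pattern theory.

```python
import xml.etree.ElementTree as ET

def content_class(elem: ET.Element) -> str:
    """Coarsely classify an element by what it contains, using only the
    document instance (no schema): empty, text-only, element-only, or mixed."""
    has_children = len(elem) > 0
    has_text = bool((elem.text or "").strip()) or any(
        (child.tail or "").strip() for child in elem
    )
    if not has_children and not has_text:
        return "empty"          # e.g. a milestone-like marker
    if not has_children:
        return "text-only"      # e.g. an atomic field
    if not has_text:
        return "element-only"   # e.g. a container/block
    return "mixed"              # e.g. inline markup inside prose

doc = ET.fromstring("<p>Some <em>inline</em> text.<br/></p>")
for e in doc.iter():
    print(e.tag, content_class(e))
```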
Abstract:
Although the Standard Model of particle physics (SM) provides an extremely successful description of ordinary matter, one knows from astronomical observations that it accounts for only around 5% of the total energy density of the Universe, whereas around 30% is contributed by dark matter. Motivated by anomalies in cosmic-ray observations and by attempts to solve open questions of the SM such as the (g-2)_mu discrepancy, U(1) extensions of the SM gauge group have attracted attention in recent years. In the considered U(1) extensions a new, light messenger particle, the hidden photon, couples to the hidden sector as well as to the electromagnetic current of the SM by kinetic mixing. This allows for a search for this particle in laboratory experiments exploring the electromagnetic interaction. Various experimental programs have been started to search for hidden photons, such as electron-scattering experiments, which are a versatile tool to explore various physics phenomena. One approach is the dedicated search in fixed-target experiments at modest energies, as performed at MAMI or at JLAB. In these experiments the scattering of an electron beam off a hadronic target, e+(A,Z)->e+(A,Z)+l^+l^-, is investigated and a search for a very narrow resonance in the invariant mass distribution of the lepton pair is performed. This requires an accurate understanding of the theoretical basis of the underlying processes. For this purpose, the first part of this work demonstrates how the hidden photon can be motivated from existing puzzles encountered at the precision frontier of the SM. The main part of this thesis deals with the analysis of the theoretical framework for electron-scattering fixed-target experiments searching for hidden photons. As a first step, the cross section for the bremsstrahlung emission of hidden photons in such experiments is studied. Based on these results, the applicability of the Weizsäcker-Williams approximation to calculate the signal cross section of the process, which is widely used to design such experimental setups, is investigated. In a next step, the reaction e+(A,Z)->e+(A,Z)+l^+l^- is analyzed as signal and background process in order to describe existing data obtained by the A1 experiment at MAMI, with the aim of giving accurate predictions of exclusion limits for the hidden photon parameter space. Finally, the derived methods are used to make predictions for future experiments, e.g., at MESA or at JLAB, allowing for a comprehensive study of the discovery potential of the complementary experiments. In the last part, a feasibility study for probing the hidden photon model by rare kaon decays is performed. For this purpose, invisible as well as visible decays of the hidden photon are considered within different classes of models. This allows one to derive bounds on the parameter space from existing data and to estimate the reach of future experiments.
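For reference, the kinetic-mixing coupling referred to above is conventionally written as follows (a standard parametrization from the hidden-photon literature, not quoted from the thesis; sign conventions vary), where A' denotes the hidden photon, F and F' the field-strength tensors of the photon and hidden photon, and ε the mixing parameter:

```latex
\mathcal{L} \supset -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu}
             \;-\; \tfrac{1}{4} F'_{\mu\nu} F'^{\mu\nu}
             \;-\; \tfrac{\varepsilon}{2}\, F_{\mu\nu} F'^{\mu\nu}
             \;+\; \tfrac{m_{A'}^{2}}{2}\, A'_{\mu} A'^{\mu}
```

After diagonalizing the kinetic terms, the hidden photon couples to the electromagnetic current with strength εe, which is why the fixed-target experiments above can look for it as a narrow l+l- resonance.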
Abstract:
We present an analysis of daily extreme precipitation events for the extended winter season (October–March) at 20 Mediterranean coastal sites covering the period 1950–2006. The heavy-tailed behaviour of precipitation extremes and estimated return levels, including associated uncertainties, are derived by applying a procedure based on the Generalized Pareto Distribution, in combination with recently developed methods. Precipitation extremes make an important contribution to seasonal totals (approximately 60% for all series). Three stations (one in the western Mediterranean and the others in the eastern basin) have a 5-year return level above 100 mm, while the lowest value (estimated for two Italian series) is equal to 58 mm. As for the 50-year return level, an Italian station (Genoa) has the highest value of 264 mm, while the other values range from 82 to 200 mm. Furthermore, six series (from stations located in France, Italy, Greece, and Cyprus) show a significant negative tendency in the probability of observing an extreme event. The relationship between extreme precipitation events and the large-scale atmospheric circulation in the upper, mid and lower troposphere is investigated using NCEP/NCAR reanalysis data. A two-step classification procedure identifies three significant anomaly patterns for both the western-central and the eastern part of the Mediterranean basin. In the western Mediterranean, the anomalous southwesterly surface-to-mid-tropospheric flow is connected with enhanced moisture transport from the Atlantic. During ≥5-year return level events, the subtropical jet stream axis is aligned with the African coastline and interacts with the eddy-driven jet stream. This is connected with enhanced large-scale ascending motions and instability, and leads to the development of severe precipitation events. For eastern Mediterranean extreme precipitation events, the identified anomaly patterns suggest warm air advection connected with anomalous ascending motions and an increase in low- to mid-tropospheric moisture. Furthermore, the jet stream position (during ≥5-year return level events) supports the eastern basin being in a divergence area, where ascending motions are favoured. Our results contribute to an improved understanding of daily precipitation extremes in the cold season and the associated large-scale atmospheric features.
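A minimal sketch of the Generalized-Pareto-based return-level estimate used above, assuming daily totals in millimetres, a hypothetical 95th-percentile threshold, and roughly 182 extended-winter days per year; it omits the declustering and uncertainty quantification a full analysis needs, and the data are synthetic, not one of the 20 station series.

```python
import numpy as np
from scipy.stats import genpareto

def gpd_return_level(daily_mm, threshold, years, days_per_year=182):
    """N-year return level from peaks-over-threshold excesses:
    z_N = u + (sigma/xi) * ((N * n_y * zeta_u)^xi - 1),
    where zeta_u is the per-day exceedance probability of threshold u."""
    exceed = daily_mm[daily_mm > threshold] - threshold
    xi, _, sigma = genpareto.fit(exceed, floc=0)   # shape, loc (fixed), scale
    zeta_u = len(exceed) / len(daily_mm)
    m = years * days_per_year                      # observations in N years
    return threshold + (sigma / xi) * ((m * zeta_u) ** xi - 1.0)

rng = np.random.default_rng(1)
daily = rng.gamma(shape=0.6, scale=8.0, size=57 * 182)  # 57 winters of fake data
print(gpd_return_level(daily, threshold=np.quantile(daily, 0.95), years=5))
```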
Abstract:
High levels of HIV-1 replication during the chronic phase of infection usually correlate with rapid progression to severe immunodeficiency. However, a minority of highly viremic individuals remains asymptomatic and maintains high CD4⁺ T cell counts. This tolerant profile is poorly understood and reminiscent of the widely studied nonprogressive disease model of SIV infection in natural hosts. Here, we identify transcriptome differences between rapid progressors (RPs) and viremic nonprogressors (VNPs) and highlight several genes relevant for the understanding of HIV-1-induced immunosuppression. RPs were characterized by a specific transcriptome profile of CD4⁺ and CD8⁺ T cells similar to that observed in pathogenic SIV-infected rhesus macaques. In contrast, VNPs exhibited lower expression of interferon-stimulated genes and shared a common gene regulation profile with nonpathogenic SIV-infected sooty mangabeys. A short list of genes associated with VNP, including CASP1, CD38, LAG3, TNFSF13B, SOCS1, and EEF1D, showed significant correlation with time to disease progression when evaluated in an independent set of CD4⁺ T cell expression data. This work characterizes 2 minimally studied clinical patterns of progression to AIDS, whose analysis may inform our understanding of HIV pathogenesis.
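As an illustration of the final validation step described above, a rank correlation between one candidate gene's expression and time to disease progression across patients; the arrays and the gene chosen are hypothetical stand-ins, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical CD4+ T cell expression of one candidate gene (e.g. CD38)
# and time to progression (months) in an independent patient set.
expression = np.array([5.1, 7.3, 6.2, 8.8, 4.0, 6.9, 7.7, 5.5])
months_to_progression = np.array([40, 18, 30, 9, 55, 13, 14, 38])

rho, p_value = spearmanr(expression, months_to_progression)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")  # expect a strongly negative rho
```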
Abstract:
As a species of major interest for aquaculture, the Nile tilapia, Oreochromis niloticus, has had its sex determination system (SDS) widely investigated. In this species, sex determination is considered to be governed by interactions between a complex system of genetic sex determination factors (GSD) and the influence of temperature (TSD) during a critical period. Previous studies were carried out exclusively on domestic stocks, with the associated genetic and maintenance limitations. Given the wide distribution and adaptation potential of the Nile tilapia, we investigated under controlled conditions the sex determination system of natural populations adapted to three extreme thermal regimes: stable extreme environments in Ethiopia, with either cold temperatures in a highland lake (Lake Koka) or warm temperatures in hydrothermal springs (Lake Metahara), and an environment with large seasonal variations in Ghana (Kpandu, Lake Volta). The sex ratio analysis was conducted on progenies reared at a constant basal (27 °C) or high (36 °C) temperature during the 30 days following yolk-sac resorption. Sex ratios of the progenies reared at the standard temperature suggest that the three populations share a similar complex GSD system based on a predominant male heterogametic factor, with additional influences of polymorphism at this locus and/or the action of minor factors. The three populations presented a clear thermosensitivity of sex differentiation, with large variations in the intensity of the response depending on the parents. This confirms the presence of genotype-environment interactions in the TSD of Nile tilapia. Furthermore, the existence of naturally sex-reversed individuals is strongly suggested in two populations (Kpandu and Koka). However, it was not possible here to infer whether the sex reversal resulted from minor genetic factors and/or environmental influences. The present study demonstrates for the first time the conservation of a complex SDS combining polymorphic GSD and TSD components in natural populations of Nile tilapia. We discuss the evolutionary implications of our findings and highlight the importance of field investigations of sex determination.
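A minimal sketch of the kind of sex-ratio test such progeny data call for, checking one brood's deviation from the 1:1 ratio expected under simple male heterogamety; the counts are made up for illustration, not taken from the study.

```python
from scipy.stats import chisquare

# Hypothetical progeny counts for one brood reared at 36 degrees C.
males, females = 78, 42
total = males + females

# Under a simple XX/XY (male-heterogametic) system with no thermal or
# minor-factor effects, the expected sex ratio is 1:1.
stat, p_value = chisquare([males, females], f_exp=[total / 2, total / 2])
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")  # small p => ratio is biased
```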
Abstract:
Small clusters of gallium oxide, a technologically important high-temperature ceramic, together with the interaction of nucleic acid bases with graphene and a small-diameter carbon nanotube, are the focus of the first-principles calculations in this work. A high-performance parallel computing platform was also developed to perform these calculations at Michigan Tech. The first-principles calculations are based on density functional theory employing either the local density or a gradient-corrected approximation, together with plane-wave and Gaussian basis sets. Bulk Ga2O3 is known to be a very good candidate for fabricating electronic devices that operate at high temperatures. To explore the properties of Ga2O3 at the nanoscale, we have performed a systematic theoretical study of small polyatomic gallium oxide clusters. The calculated results show that all lowest-energy isomers of GamOn clusters are dominated by Ga-O bonds over metal-metal or oxygen-oxygen bonds. Analysis of atomic charges suggests the clusters to be highly ionic, similar to the case of bulk Ga2O3. In the study of the sequential oxidation of these clusters starting from Ga2O, it is found that the most stable isomers display up to four different backbones of constituent atoms. Furthermore, the predicted configuration of the ground state of Ga2O was recently confirmed by the experimental results of Neumark's group. Guided by the results of the gallium oxide cluster calculations, the performance-related challenge of computational simulations, that of producing high-performance computers/platforms, has been addressed. Several engineering aspects were thoroughly studied during the design, development and implementation of the high-performance parallel computing platform, rama, at Michigan Tech. In an attempt to stay true to the principles of the Beowulf revolution, the rama cluster was extensively customized to make it easy to understand and use, for administrators as well as end-users. Following the results of benchmark calculations, and to keep up with the complexity of the systems under study, rama has been expanded to a total of sixty-four processors. Interest in the non-covalent interaction of DNA with carbon nanotubes has steadily increased during the past several years. This hybrid system, at the junction of the biological regime and the nanomaterials world, possesses features which make it very attractive for a wide range of applications. Using the in-house computational power available, we have studied details of the interaction of the nucleic acid bases with a graphene sheet as well as with a high-curvature, small-diameter carbon nanotube. The calculated trend in the binding energies strongly suggests that the polarizability of the base molecules determines the interaction strength of the nucleic acid bases with graphene. When comparing the results obtained here for physisorption on the small-diameter nanotube with those from the study on graphene, it is observed that the interaction strength of the nucleic acid bases is smaller for the tube. Thus, these results show that the effect of introducing curvature is to reduce the binding energy. The binding energies for the two extreme cases of negligible curvature (i.e., a flat graphene sheet) and of very high curvature (i.e., a small-diameter nanotube) may be considered as upper and lower bounds. This finding represents an important step towards a better understanding of the experimentally observed sequence-dependent interaction of DNA with carbon nanotubes.
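A minimal sketch of the binding-energy bookkeeping behind the graphene/nanotube comparison above, E_bind = E(base+substrate) - E(base) - E(substrate); all numbers are made-up placeholders, not results from the thesis.

```python
def binding_energy(e_complex_ev: float, e_base_ev: float, e_substrate_ev: float) -> float:
    """Adsorption (binding) energy of a nucleic acid base on a substrate.
    Negative values mean the adsorbed complex is bound."""
    return e_complex_ev - (e_base_ev + e_substrate_ev)

# Hypothetical total energies (eV) from separate DFT runs.
e_base, e_graphene, e_tube = -1500.00, -9000.00, -12000.00
e_on_graphene, e_on_tube = -10501.00, -13500.70

for name, e_cmplx, e_sub in [("graphene", e_on_graphene, e_graphene),
                             ("nanotube", e_on_tube, e_tube)]:
    print(name, binding_energy(e_cmplx, e_base, e_sub))
# The flat sheet should bind more strongly (more negative) than the
# high-curvature tube, bracketing the curvature effect from both sides.
```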
Abstract:
In-cylinder pressure transducers have been used for decades to record combustion pressure inside a running engine. However, due to the extreme operating environment, transducer design and installation must be considered in order to minimize measurement error. One such error is caused by thermal shock, in which the pressure transducer experiences a high heat flux that can distort the transducer diaphragm and also change the crystal sensitivity. This research focused on investigating the effects of thermal shock on in-cylinder pressure transducer data quality using a 2.0L, four-cylinder, spark-ignited, direct-injected, turbocharged GM engine. Cylinder four was modified with five ports to accommodate pressure transducers from different manufacturers: an AVL GH14D, an AVL GH15D, a Kistler 6125C, and a Kistler 6054AR. The GH14D, GH15D, and 6054AR were M5-size transducers; the 6125C was a larger, 6.2 mm transducer. Note that both AVL pressure transducers utilized a PH03 flame arrestor. Sweeps of ignition timing (spark sweep), engine speed, and engine load were performed to study the effects of thermal shock on each pressure transducer. The project consisted of two distinct phases: experimental engine testing and simulation using a commercially available software package. A comparison was performed to characterize the quality of the data between the actual cylinder pressure and the simulated results. This comparison was valuable because the simulation results did not include thermal shock effects. All three sets of tests showed that the peak cylinder pressure was essentially unaffected by thermal shock. Comparison of the experimental data with the simulated results showed very good correlation. The spark sweep was performed at 1300 RPM and 3.3 bar NMEP and showed that the differences between the simulated results (no thermal shock) and the experimental data for the indicated mean effective pressure (IMEP) and the pumping mean effective pressure (PMEP) were significantly less than the published accuracies: all transducers had an IMEP percent difference of less than 0.038%, and less than 0.32% for PMEP, whereas Kistler and AVL publish that the accuracy of their pressure transducers is within plus or minus 1% for the IMEP (AVL 2011; Kistler 2011). In addition, the difference in average exhaust absolute pressure between the simulated results and the experimental data was greatest for the two Kistler pressure transducers; the location and the lack of a flame arrestor are believed to be the cause of the increased error. For the engine speed sweep, the torque output was held constant at 203 Nm (150 ft-lbf) from 1500 to 4000 RPM. The difference in IMEP was less than 0.01% and in PMEP less than 1%, except for the AVL GH14D at 5% and the AVL GH15DK at 2.25%. A noticeable error in PMEP appeared as the load increased during the engine speed sweeps, as expected. The load sweep was conducted at 2000 RPM over a range of NMEP from 1.1 to 14 bar. The differences in IMEP values were less than 0.08%, while the PMEP values were below 1%, except for the AVL GH14D at 1.8% and the AVL GH15DK at 1.25%. In-cylinder pressure transducer data quality was effectively analyzed using a combination of experimental data and simulation results. Several criteria can be used to investigate the impact of thermal shock on data quality as well as to determine the best location and thermal protection for various transducers.
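A minimal sketch of how an indicated mean effective pressure like those compared above is obtained from a crank-angle-resolved pressure trace, IMEP = (closed-loop integral of p dV) / V_d; the slider-crank geometry and the synthetic pressure pulse are generic placeholders, not the 2.0L GM engine's data.

```python
import numpy as np

def cylinder_volume(theta_deg, bore=0.086, stroke=0.086, conrod=0.145, cr=9.5):
    """Instantaneous cylinder volume (m^3) from slider-crank kinematics."""
    a = stroke / 2.0                          # crank radius
    v_disp = np.pi / 4 * bore ** 2 * stroke   # displaced volume V_d
    v_clear = v_disp / (cr - 1.0)             # clearance volume
    th = np.radians(theta_deg)
    x = a * (1 - np.cos(th)) + conrod - np.sqrt(conrod ** 2 - (a * np.sin(th)) ** 2)
    return v_clear + np.pi / 4 * bore ** 2 * x

theta = np.linspace(-360.0, 360.0, 1441)      # one four-stroke cycle, 0.5 deg steps
vol = cylinder_volume(theta)
# Fake trace: ~2 bar baseline plus a combustion pulse just after firing TDC.
pressure = 2.0e5 + 3.0e6 * np.exp(-((theta - 10.0) / 40.0) ** 2)

v_disp = np.pi / 4 * 0.086 ** 2 * 0.086
work = np.sum(0.5 * (pressure[1:] + pressure[:-1]) * np.diff(vol))  # trapezoidal loop integral of p dV
print(f"IMEP = {work / v_disp / 1e5:.2f} bar")
```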
Abstract:
The Twentieth Century Reanalysis (20CR) is an atmospheric dataset consisting of 56 ensemble members, which covers the entire globe and reaches back to 1871. To assess the suitability of this dataset for studying past extremes, we analysed a prominent extreme event, namely the Galveston Hurricane, which made landfall in September 1900 in Texas, USA. The ensemble mean of 20CR shows a track of the pressure minimum with a small standard deviation among the 56 ensemble members in the area of the Gulf of Mexico. However, there are systematic differences between the assimilated “Best Track” from the International Best Track Archive for Climate Stewardship (IBTrACS) and the ensemble mean track in 20CR. East of the Strait of Florida, the tracks derived from 20CR are located systematically northeast of the assimilated track while in the Gulf of Mexico, the 20CR tracks are systematically shifted to the southwest compared to the IBTrACS position. The hurricane can also be observed in the wind field, which shows a cyclonic rotation and a relatively calm zone in the centre of the hurricane. The 20CR data reproduce the pressure gradient and cyclonic wind field. Regarding the amplitude of the wind speeds, the ensemble mean values from 20CR are significantly lower than the wind speeds known from measurements.
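A minimal sketch of how a storm track like the one discussed above can be pulled from gridded ensemble data, taking the sea-level-pressure minimum per member and time step and then forming the ensemble mean and spread; the array layout, names, and random field are assumptions for illustration, not the 20CR file format.

```python
import numpy as np

def track_from_pressure(slp, lats, lons):
    """slp: array (members, times, lat, lon) of sea-level pressure.
    Returns per-member tracks of the pressure minimum, the ensemble-mean
    track, and the across-member standard deviation of the positions."""
    n_mem, n_t = slp.shape[:2]
    tracks = np.empty((n_mem, n_t, 2))
    for m in range(n_mem):
        for t in range(n_t):
            i, j = np.unravel_index(np.argmin(slp[m, t]), slp[m, t].shape)
            tracks[m, t] = lats[i], lons[j]
    return tracks, tracks.mean(axis=0), tracks.std(axis=0)

# Hypothetical 56-member, 8-step field over a Gulf-of-Mexico window.
rng = np.random.default_rng(2)
lats, lons = np.linspace(20, 32, 25), np.linspace(-98, -78, 41)
slp = 1015e2 + rng.normal(0, 50, size=(56, 8, 25, 41))
tracks, mean_track, spread = track_from_pressure(slp, lats, lons)
print(mean_track[0], spread[0])
```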
Abstract:
The meteorological circumstances that led to the Blizzard of March 1888 that hit New York are analysed in Version 2 of the “Twentieth Century Reanalysis” (20CR). The potential of this data set for studying historical extreme events has not yet been fully explored. A detailed analysis of 20CR data alongside other data sources (including historical instrumental data and weather maps) for historical extremes such as the March 1888 blizzard may give insights into the limitations of 20CR. We find that 20CR reproduces the circulation pattern as well as the temperature development very well. Regarding the absolute values of variables such as snowfall or minimum and maximum surface pressure, there is an underestimation of the observed extremes, which may be due to the low spatial resolution of 20CR and the fact that only the ensemble mean is considered. Despite this drawback, the dataset allows us to gain new information due to its complete spatial and temporal coverage.
Abstract:
We obtain eigenvalue enclosures and basisness results for eigen- and associated functions of a non-self-adjoint unbounded linear operator pencil A − λB in which B is uniformly positive and the essential spectrum of the pencil is empty. Both Riesz basisness and Bari basisness results are obtained. The results are applied to a system of singular differential equations arising in the study of Hagen–Poiseuille flow with non-axisymmetric disturbances.
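For concreteness, the spectral problem for such a pencil reads as follows (standard notation for the setting described, with H the underlying Hilbert space):

```latex
(A - \lambda B)\,u = 0, \qquad u \in \operatorname{dom}(A) \subseteq H,\quad u \neq 0,
```

where B uniformly positive means $(Bu, u) \ge \beta \|u\|^{2}$ for some $\beta > 0$ and all u, so the eigenvalues are exactly the points λ at which A − λB fails to be boundedly invertible.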
Abstract:
BACKGROUND: Decisions regarding whether to administer intensive care to extremely premature infants are often based on gestational age alone. However, other factors also affect the prognosis for these patients. METHODS: We prospectively studied a cohort of 4446 infants born at 22 to 25 weeks' gestation (determined on the basis of the best obstetrical estimate) in the Neonatal Research Network of the National Institute of Child Health and Human Development to relate risk factors assessable at or before birth to the likelihood of survival, survival without profound neurodevelopmental impairment, and survival without neurodevelopmental impairment at a corrected age of 18 to 22 months. RESULTS: Among study infants, 3702 (83%) received intensive care in the form of mechanical ventilation. Among the 4192 study infants (94%) for whom outcomes were determined at 18 to 22 months, 49% died, 61% died or had profound impairment, and 73% died or had impairment. In multivariable analyses of infants who received intensive care, exposure to antenatal corticosteroids, female sex, singleton birth, and higher birth weight (per each 100-g increment) were each associated with reductions in the risk of death and the risk of death or profound or any neurodevelopmental impairment; these reductions were similar to those associated with a 1-week increase in gestational age. At the same estimated likelihood of a favorable outcome, girls were less likely than boys to receive intensive care. The outcomes for infants who underwent ventilation were better predicted with the use of the above factors than with use of gestational age alone. CONCLUSIONS: The likelihood of a favorable outcome with intensive care can be better estimated by consideration of four factors in addition to gestational age: sex, exposure or nonexposure to antenatal corticosteroids, whether single or multiple birth, and birth weight. (ClinicalTrials.gov numbers, NCT00063063 and NCT00009633.)
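A minimal sketch of the form of multivariable model described above, a logistic regression of a favorable outcome on gestational age plus the four additional factors; the data are synthetic placeholders with made-up effect sizes, not the cohort's results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
# Synthetic stand-ins for the five predictors named in the abstract.
gest_age = rng.integers(22, 26, n)        # completed weeks of gestation
steroids = rng.integers(0, 2, n)          # antenatal corticosteroid exposure
female = rng.integers(0, 2, n)
singleton = rng.integers(0, 2, n)
bw_100g = rng.normal(6.0, 1.0, n)         # birth weight in 100-g units

# Generate outcomes from an assumed true model, then refit it.
logit = (-14 + 0.5 * gest_age + 0.6 * steroids + 0.5 * female
         + 0.4 * singleton + 0.5 * bw_100g)
favorable = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([gest_age, steroids, female, singleton, bw_100g])
model = LogisticRegression().fit(X, favorable)
print(np.exp(model.coef_))  # per-unit odds ratios for a favorable outcome
```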
Abstract:
Tumor necrosis factor (TNF) is known to have antiproliferative effects on a wide variety of tumor cells but proliferative effects on normal cells. However, the molecular basis for such differences in the action of TNF is unknown. The overall objectives of my research are to investigate the role of oncogenes in TNF sensitivity and to delineate some of the molecular mechanisms involved in TNF sensitivity and resistance. To accomplish these objectives, I transfected TNF-resistant C3H mouse embryo fibroblasts (10T1/2) with an activated Ha-ras oncogene and determined whether these cells exhibit altered sensitivity to TNF. The results indicated that 10T1/2 cells transfected with an activated Ha-ras oncogene (10T-EJ) not only produced tumors in nude mice but also exhibited extreme sensitivity to cytolysis by TNF. In contrast, 10T1/2 cells transfected with the pSV2-neo gene alone were resistant to the cytotoxic effects of TNF. I also found that TNF-induced cell death was mediated through apoptosis. The differential sensitivity of the 10T1/2 and 10T-EJ cell lines to TNF was not due to differences in the number of TNF receptors on their cell surface. In addition, TNF-resistant revertants isolated from Ha-ras-transformed, TNF-sensitive cells still expressed the same amount of p21 as TNF-sensitive cells and were still tumorigenic, suggesting that Ha-ras-induced transformation and TNF sensitivity may follow different pathways. Interestingly, TNF-resistant but not TNF-sensitive cells expressed higher levels of bcl-2, c-myc, and manganese superoxide dismutase (MnSOD) mRNA following exposure to TNF. However, TNF treatment resulted in a marginal induction of p53 mRNA in both TNF-sensitive and TNF-resistant cells. Based on these results I conclude that (i) the Ha-ras oncogene induces both transformation and TNF sensitivity, (ii) TNF-induced cytotoxicity involves apoptosis, and (iii) TNF-induced upregulation of the bcl-2, c-myc, and MnSOD genes is associated with TNF resistance in C3H mouse embryo fibroblasts.
Abstract:
The article offers a systematic analysis of the comparative trajectory of international democratic change. In particular, it focuses on the resulting convergence or divergence of political systems, borrowing from the literatures on institutional change and policy convergence. To this end, political-institutional data in line with Arend Lijphart’s (1999, 2012) empirical theory of democracy for 24 developed democracies between 1945 and 2010 are analyzed. Heteroscedastic multilevel models allow for directly modeling the development of the variance of types of democracy over time, revealing information about convergence, and adding substantial explanations. The findings indicate that there has been a trend away from extreme types of democracy in single cases, but no unconditional trend of convergence can be observed. However, there are conditional processes of convergence. In particular, economic globalization and the domestic veto structure interactively influence democratic convergence.
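A minimal sketch of the core idea behind such heteroscedastic models, estimating a linear trend in the log-scale (and hence the variance) of a democracy-type indicator across time by maximum likelihood, where a negative trend indicates convergence; this is a single-level simplification with synthetic data, not the article's multilevel specification.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
years = np.repeat(np.arange(0, 66, 5), 24)        # 1945..2010, 24 democracies
y = rng.normal(0.0, np.exp(0.5 - 0.01 * years))   # index of democracy type

def neg_log_lik(params):
    b0, g0, g1 = params
    # Standard deviation modeled as a log-linear function of time.
    sigma = np.exp(g0 + g1 * years)
    return -norm.logpdf(y, loc=b0, scale=sigma).sum()

res = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b0, g0, g1 = res.x
print(f"log-scale trend per year: {g1:.4f}")  # negative => convergence
```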
Abstract:
During winter 2013, extremely high PM2.5 (particulate matter with an aerodynamic diameter < 2.5 μm) mass concentrations (24 h samples), i.e., 4–20 times higher than the World Health Organization guideline, were found in four major cities in China: Xi'an, Beijing, Shanghai and Guangzhou. Statistical analysis of a combined data set from elemental carbon (EC), organic carbon (OC), 14C and biomass-burning marker measurements using Latin hypercube sampling allowed a quantitative source apportionment of carbonaceous aerosols. Based on 14C measurements of EC fractions (six samples per city), we found that fossil emissions from coal combustion and vehicle exhaust dominated EC with a mean contribution of 75 ± 8% across all sites. The remaining 25 ± 8% was exclusively attributed to biomass combustion, consistent with the measurements of biomass-burning markers such as anhydrosugars (levoglucosan and mannosan) and water-soluble potassium (K+). With a combination of the levoglucosan-to-mannosan and levoglucosan-to-K+ ratios, the major source of biomass burning in winter in China is suggested to be combustion of crop residues. The contribution of fossil sources to OC was highest in Beijing (58 ± 5%) and decreased from Shanghai (49 ± 2%) to Xi'an (38 ± 3%) and Guangzhou (35 ± 7%). Generally, a larger fraction of fossil OC was of secondary rather than primary origin at all sites. Non-fossil sources accounted on average for 55 ± 10% and 48 ± 9% of OC and total carbon (TC), respectively, which suggests that non-fossil emissions were very important contributors to urban carbonaceous aerosols in China. Primary biomass-burning emissions accounted for 40 ± 8, 48 ± 18, 53 ± 4 and 65 ± 26% of non-fossil OC for Xi'an, Beijing, Shanghai and Guangzhou, respectively. Other non-fossil sources excluding primary biomass burning were mainly attributed to the formation of secondary organic carbon (SOC) from non-fossil precursors such as biomass-burning emissions. For each site, we also compared samples from moderately and heavily polluted days according to particulate matter mass. Despite a significant increase in the absolute mass concentrations of primary emissions from both fossil and non-fossil sources during the heavily polluted events, their relative contribution to TC actually decreased, whereas the portion of SOC consistently increased at all sites. This observation indicates that SOC was an important fraction in the increment of carbonaceous aerosols during the haze episode in China.
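A minimal sketch of the 14C-based fossil/non-fossil split behind these numbers, propagating measurement uncertainty with Latin hypercube sampling as mentioned in the abstract; the fraction-modern values, uncertainties, and the contemporary-carbon reference are illustrative assumptions, not the study's inputs.

```python
import numpy as np
from scipy.stats import norm, qmc

def nonfossil_fraction(f14c_sample, f14c_nonfossil_ref):
    """Non-fossil carbon fraction from fraction-modern (F14C) values:
    fossil carbon has F14C = 0, so f_nonfossil = F14C_sample / F14C_ref."""
    return np.clip(f14c_sample / f14c_nonfossil_ref, 0.0, 1.0)

# Hypothetical measurement: F14C of an EC sample and the reference value
# for contemporary (non-fossil) carbon, each with a 1-sigma uncertainty.
sampler = qmc.LatinHypercube(d=2, seed=5)
u = sampler.random(n=10_000)
f14c_ec = norm.ppf(u[:, 0], loc=0.27, scale=0.02)
f14c_ref = norm.ppf(u[:, 1], loc=1.10, scale=0.05)

f_nf = nonfossil_fraction(f14c_ec, f14c_ref)
lo, med, hi = np.percentile(f_nf, [16, 50, 84])
print(f"non-fossil EC fraction: {med:.2f} (+{hi - med:.2f}/-{med - lo:.2f})")
```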