950 results for Very-High-Cycle Fatigue
Abstract:
Quinoa (Chenopodium quinoa) is a seed crop native to the Andes that can be used in a variety of food products in a similar manner to cereals. Unlike most plants, quinoa contains protein with a balanced amino acid profile. This makes it an interesting raw material for, e.g., dairy product substitutes, a growing market in Europe and the U.S. Quinoa can, however, have unpleasant off-flavours when processed into formulated products. One means of improving palatability is seed germination. In addition, the increased activities of hydrolytic enzymes can have a beneficial influence in food processing. In this thesis, the germination pattern of quinoa was studied and the influence of quinoa malt was evaluated in a model product. Additionally, to explore its potential for dairy-type products, quinoa protein was isolated from an embryo-enriched milling fraction of non-germinated quinoa and tested for functional and gelation properties. Quinoa seeds imbibed water very rapidly, and most seeds showed radicle protrusion after 8-9 h. The α-amylase activity in the starchy perisperm was very low and started to increase only after 24 h of germination. Proteolytic activity was very high in dry ungerminated seeds and increased slightly over 24 h; a significant fraction of this activity was located in the micropylar endosperm. The incorporation of germinated quinoa into gluten-free bread had no significant effect on the baking properties, owing to the low α-amylase activity. Upon acidification with glucono-δ-lactone, quinoa milk formed a structured gel. The gelation behaviour was further studied using a quinoa protein isolate (QPI) extracted from an embryo-enriched milling fraction. QPI required a heat-denaturation step to form gel structures. The heating pH influenced the properties drastically: heating at pH 10.5 led to a dramatic increase in solubility and emulsifying properties and to the formation of a fine-structured gel with a high storage modulus (G') when acidified. Protein heated at pH 8.5 differed very little from the unheated protein in terms of functional properties and formed only a randomly aggregated coagulum with a low G'. Further study of the changes over the course of heating showed that the mechanism of heat denaturation and aggregation indeed varied markedly with pH. The large difference in gelation behaviour may be related to the nature of the aggregates formed during heating. To conclude, germination to increase enzyme activities may not be feasible, but the structure-forming properties of quinoa protein could possibly be exploited in dairy-type products.
Abstract:
In order to widely use Ge and III-V materials instead of Si in advanced CMOS technology, the processing and integration of these materials have to be well established so that their high-mobility benefit is not swamped by imperfect manufacturing procedures. In this dissertation, a number of key bottlenecks in the realization of Ge devices are investigated. We address the challenge of forming low-resistivity contacts on n-type Ge, comparing conventional rapid thermal annealing (RTA) with advanced laser thermal annealing (LTA) techniques. LTA appears to be a feasible approach for realizing low-resistivity contacts, with an extremely sharp germanide-substrate interface and a contact resistivity on the order of 10⁻⁷ Ω·cm². Furthermore, the influence of RTA and LTA on dopant activation and leakage-current suppression in n+/p Ge junctions was compared. While providing a very high active carrier concentration (> 10²⁰ cm⁻³), LTA resulted in a higher leakage current than RTA, which provided a lower carrier concentration (~10¹⁹ cm⁻³). This indicates a trade-off between a high activation level and junction leakage current. A high ION/IOFF ratio of ~10⁷ was obtained, which to the best of our knowledge is the best value reported for n-type Ge so far. Simulations were carried out to investigate how target sputtering, dose retention, and damage formation are generated in thin-body semiconductors by energetic ion impacts, and how they depend on the physical properties of the target material. Solid-phase epitaxy studies in wide and thin Ge fins confirmed the formation of twin-boundary defects and random nucleation growth, as in Si, but here an annealing temperature of 600 °C was found to be effective in reducing these defects. Finally, a non-destructive doping technique was successfully implemented to dope Ge nanowires, where the nanowire resistivity was reduced by five orders of magnitude using a PH₃-based in-diffusion process.
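As a rough back-of-the-envelope illustration of what a specific contact resistivity on the order of 10⁻⁷ Ω·cm² implies for a single scaled contact, the following sketch converts it into a per-contact resistance; the contact dimensions are assumed example values, not figures from the dissertation.

```python
# Rough illustration only: per-contact resistance implied by a given specific
# contact resistivity. The contact size is an assumed example value, not a
# number taken from the dissertation.

rho_c = 1e-7                      # specific contact resistivity, ohm*cm^2
side_nm = 20.0                    # assumed square contact side length, nm
side_cm = side_nm * 1e-7          # nm -> cm
area_cm2 = side_cm ** 2           # contact area, cm^2

R_contact = rho_c / area_cm2      # ohms, in the area-dominated limit
print(f"contact area      : {area_cm2:.1e} cm^2")
print(f"contact resistance: {R_contact / 1e3:.1f} kOhm")
```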
Abstract:
New burned area datasets and top-down constraints from atmospheric concentration measurements of pyrogenic gases have decreased the large uncertainty in fire emissions estimates. However, significant gaps remain in our understanding of the contribution of deforestation, savanna, forest, agricultural waste, and peat fires to total global fire emissions. Here we used a revised version of the Carnegie-Ames-Stanford-Approach (CASA) biogeochemical model and improved satellite-derived estimates of area burned, fire activity, and plant productivity to calculate fire emissions for the 1997-2009 period at a 0.5° spatial resolution with a monthly time step. From November 2000 onwards, estimates were based on burned area, active fire detections, and plant productivity from the MODerate resolution Imaging Spectroradiometer (MODIS) sensor. For the partitioning we focused on the MODIS era. Prior to MODIS (1997-2000), we used maps of burned area derived from Tropical Rainfall Measuring Mission (TRMM) Visible and Infrared Scanner (VIRS) and Along-Track Scanning Radiometer (ATSR) active fire data, together with estimates of plant productivity derived from Advanced Very High Resolution Radiometer (AVHRR) observations during the same period. Average global fire carbon emissions according to this version 3 of the Global Fire Emissions Database (GFED3) were 2.0 Pg C year⁻¹, with significant interannual variability during 1997-2001 (2.8 Pg C year⁻¹ in 1998 and 1.6 Pg C year⁻¹ in 2001). Globally, emissions during 2002-2007 were relatively constant (around 2.1 Pg C year⁻¹) before declining in 2008 (1.7 Pg C year⁻¹) and 2009 (1.5 Pg C year⁻¹), partly due to lower deforestation fire emissions in South America and tropical Asia. On a regional basis, emissions were highly variable during 2002-2007 (e.g., boreal Asia, South America, and Indonesia), but these regional differences canceled out at the global level. During the MODIS era (2001-2009), most carbon emissions were from fires in grasslands and savannas (44%), with smaller contributions from tropical deforestation and degradation fires (20%), woodland fires (mostly confined to the tropics, 16%), forest fires (mostly in the extratropics, 15%), agricultural waste burning (3%), and tropical peat fires (3%). The contribution from agricultural waste fires is likely a lower bound because our approach for measuring burned area could not detect all of these relatively small fires. Total carbon emissions were on average 13% lower than in our previous (GFED2) work. For reduced trace gases such as CO and CH₄, deforestation, degradation, and peat fires were more important contributors because of their higher emissions of reduced trace gases per unit of carbon combusted compared to savanna fires. Carbon emissions from tropical deforestation, degradation, and peatland fires were on average 0.5 Pg C year⁻¹; these emissions may not be balanced by regrowth following fire. Our results provide the first global assessment of the contribution of different sources to total global fire emissions for the past decade, and supply the community with an improved 13-year fire emissions time series. © 2010 Author(s).
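To make the calculation concrete, the sketch below shows the kind of per-grid-cell, per-month bookkeeping that burned-area-based inventories of this type perform, combining burned area, modelled fuel load, and combustion completeness in the spirit of the Seiler-Crutzen formulation; all numerical values are illustrative placeholders, not GFED3 parameters.

```python
# Illustrative per-cell, per-month fire emissions bookkeeping in the spirit of
# burned-area x fuel-load x combustion-completeness x emission-factor inventories.
# All numeric values are placeholders, not GFED3 parameters.

def fire_emissions(burned_area_m2, fuel_load_kgC_m2, combustion_completeness,
                   emission_factor=1.0):
    """Emissions for one grid cell and month.

    burned_area_m2          : area burned in the cell during the month
    fuel_load_kgC_m2        : available fuel (e.g. from a biogeochemical model such as CASA)
    combustion_completeness : fraction of the fuel actually combusted (0-1)
    emission_factor         : species emitted per unit carbon combusted (1.0 for C itself)
    """
    carbon_combusted = burned_area_m2 * fuel_load_kgC_m2 * combustion_completeness
    return carbon_combusted * emission_factor

# Example: a 0.5-degree cell of ~2500 km^2 with 1% burned in savanna-like fuels.
cell_area_m2 = 2500e6
emis_kgC = fire_emissions(burned_area_m2=0.01 * cell_area_m2,
                          fuel_load_kgC_m2=0.3,
                          combustion_completeness=0.85)
print(f"{emis_kgC:.3e} kg C emitted")   # ~6.4e6 kg C for this toy cell
```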
Abstract:
This dissertation project identifies important works for solo saxophone by United States composers between 1975 and 2005. The quality, variety, expressiveness, and difficulty of the solo saxophone repertoire during these thirty years are remarkable and remedy, to some extent, the fact that the saxophone had been a largely neglected instrument in the realm of classical music. In twentieth-century music, including jazz, the saxophone nevertheless developed a unique and significant voice, as is evident in a repertoire that expanded immensely across many instrumental settings, including the orchestra, solo works, and a wide variety of chamber ensembles. Historically, the saxophone in the United States first found its niche in vaudeville, military bands, and jazz ensembles, while in Europe composers such as Debussy, D'Indy, Schmitt, Ibert, Glazounov, Heiden, and Desenclos recognized the potential of the instrument and wrote for it. The saxophone is well suited to the intimacy and unique timbral explorations of the solo literature, but only by the middle of the twentieth century did the repertoire allow the instrument to flourish into a virtuosic and expressive voice presented by successive generations of performers: Marcel Mule, Sigurd Rascher, Cecil Leeson, Jean-Marie Londeix, Fred Hemke, Eugene Rousseau, and Donald Sinta. The very high artistic level of these soloists was inspiring, and dozens of new compositions were commissioned. Through the 1960s, American composers such as Paul Creston, Leslie Bassett, Henry Cowell, Alec Wilder, and others produced eminent works for the saxophone, followed by an enormous output of quality compositions between 1975 and 2005. The works chosen for performance were selected from the thousands of compositions written between 1975 and 2005 that were researched for this project. The three recital dates were April 6, 2005, in Gildenhorn Recital Hall; December 4, 2005, in Ulrich Recital Hall; and April 15, 2006, in Gildenhorn Recital Hall. Recordings of these recitals may be obtained in person or online from the Michelle Smith Performing Arts Library of the University of Maryland, College Park.
Abstract:
BACKGROUND: Controversies exist regarding the indications for unicompartmental knee arthroplasty (UKA). The objective of this study was to report the mid-term results and examine predictors of failure in a metal-backed unicompartmental knee arthroplasty design. METHODS: At a mean follow-up of 60 months, 80 medial unicompartmental knee arthroplasties (68 patients) were evaluated. Implant survivorship was analyzed using the Kaplan-Meier method. The Knee Society objective and functional scores and radiographic characteristics were compared before surgery and at final follow-up. A Cox proportional hazards model was used to examine the association of patient age, gender, obesity (body mass index > 30 kg/m²), diagnosis, Knee Society scores, and patellar arthrosis with failure. RESULTS: There were 9 failures during follow-up. The mean Knee Society objective and functional scores were, respectively, 49 and 48 points preoperatively and 95 and 92 points postoperatively. The survival rate was 92% at 5 years and 84% at 10 years. Mean age was lower in the failure group than in the non-failure group (p < 0.01). However, none of the factors assessed was independently associated with failure in the Cox proportional hazards model. CONCLUSION: Gender, preoperative diagnosis, preoperative objective and functional scores, and patellar osteophytes were not independent predictors of failure of unicompartmental knee implants, although high body mass index trended toward significance. The findings suggest that the standard criteria for UKA may be expanded without compromising outcomes, although caution may be warranted in patients with very high body mass index pending additional data to confirm our results. LEVEL OF EVIDENCE: IV.
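For readers who want to reproduce this style of analysis on their own data, a minimal sketch of a Kaplan-Meier survivorship estimate and a Cox proportional hazards model is shown below using the Python lifelines package; the data file and column names are hypothetical and do not reproduce the study's actual dataset or code.

```python
# Minimal sketch of a Kaplan-Meier survivorship estimate and a Cox proportional
# hazards model of the kind described in the abstract. The data frame and column
# names are hypothetical; they do not reproduce the study's actual analysis.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("uka_followup.csv")   # hypothetical file: one row per knee

# Implant survivorship (failure = revision), follow-up time in months.
kmf = KaplanMeierFitter()
kmf.fit(durations=df["followup_months"], event_observed=df["failed"])
print(kmf.survival_function_at_times([60, 120]))   # survival at 5 and 10 years

# Cox model: association of age, sex, obesity, and scores with failure.
cph = CoxPHFitter()
cph.fit(df[["followup_months", "failed", "age", "male", "bmi_over_30",
            "knee_society_objective", "knee_society_functional"]],
        duration_col="followup_months", event_col="failed")
cph.print_summary()
```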
Abstract:
In a follow-up to the modest efficacy observed in the RV144 trial, researchers in the HIV vaccine field seek to substantiate and extend the results by evaluating other poxvirus vectors and combinations with DNA and protein vaccines. Earlier clinical trials (EuroVacc trials 01 to 03) evaluated the immunogenicity of HIV-1 clade C GagPolNef and gp120 antigens delivered via the poxviral vector NYVAC. These showed that a vaccination regimen including DNA-C priming prior to a NYVAC-C boost considerably enhanced vaccine-elicited immune responses compared with NYVAC-C alone. Moreover, responses were improved by using three rather than two DNA-C primes. In the present study, we assessed in nonhuman primates whether such vaccination regimens can be streamlined further by using fewer and accelerated immunizations and by employing a novel generation of improved DNA-C and NYVAC-C vaccine candidates designed for higher expression levels and more balanced immune responses. Three different DNA-C prime/NYVAC-C + protein boost vaccination regimens were tested in rhesus macaques. All regimens elicited vigorous and well-balanced CD8+ and CD4+ T cell responses that were broad and polyfunctional. Very high IgG binding titers, substantial antibody-dependent cellular cytotoxicity (ADCC), and modest antibody-dependent cell-mediated virus inhibition (ADCVI), but very low neutralization activity, were measured after the final immunizations. Overall, the immune responses elicited in all three groups were very similar and of greater magnitude, breadth, and quality than those of earlier EuroVacc vaccines. In conclusion, these findings indicate that vaccination schemes can be simplified by using improved antigens and regimens. This may offer a more practical and affordable means to elicit potentially protective immune responses upon vaccination, especially in resource-constrained settings. IMPORTANCE: Within the EuroVacc clinical trials, we previously assessed the immunogenicity of HIV clade C antigens delivered in a DNA prime/NYVAC boost regimen. The trials showed that the DNA prime crucially improved the responses, and three DNA primes with a NYVAC boost appeared to be optimal. Nevertheless, T cell responses were primarily directed toward Env, and humoral responses were modest. The aim of this study was to assess whether improved antigens have the capacity to elicit more potent and balanced responses in rhesus macaques, even with various simpler immunization regimens. Our results showed that the novel antigens indeed elicited larger numbers of T cells with a polyfunctional profile and a good Env-GagPolNef balance, as well as high-titer and Fc-functional antibody responses. Finally, comparison of the different schedules indicates that a simpler regimen of only two DNA primes and one NYVAC boost in combination with protein may be very efficient, showing that the novel antigens allow for easier immunization protocols.
Abstract:
This paper represents the first research attempt to estimate the probability that Vietnamese patients fall into destitution as a result of the financial burdens incurred during their curative stay in hospital. The study models this risk against factors such as level of insurance coverage, location of patient, and costliness of treatment, among others. The results show that very high probabilities of destitution, approximately 70%, apply to a large group of patients who are non-resident, poor, and ineligible for significant insurance coverage. There is also a probability of 58% that low-income patients who are seriously ill and face higher health care costs will quit their treatment. These facts put the Vietnamese government's ambitious plan of increasing both universal coverage (UC) to 100% of expenditure and the rate of UC beneficiaries to 100% to a serious test. The study also raises issues of asymmetric information and alternative financing options for the poor, who are most exposed to the risk of destitution, following market-based health care reforms.
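A minimal sketch of the kind of binary-outcome model described here (probability of destitution as a function of insurance coverage, residency status, and treatment cost) is given below using statsmodels; the variable names, file name, and functional form are assumptions for illustration and may differ from the paper's actual specification.

```python
# Hedged sketch of a binary-outcome model of the kind the paper describes
# (probability of destitution vs. insurance coverage, residency, and cost).
# All variable and file names are hypothetical; the paper's specification may
# differ (e.g. probit rather than logit, different covariates).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("patient_survey.csv")   # hypothetical patient-level data

model = smf.logit(
    "destitute ~ insurance_coverage + nonresident + ln_treatment_cost",
    data=df).fit()
print(model.summary())

# Predicted probability for a non-resident, low-coverage patient facing a
# costly treatment (placeholder covariate values).
example = pd.DataFrame({"insurance_coverage": [0.0],
                        "nonresident": [1],
                        "ln_treatment_cost": [17.0]})
print(model.predict(example))
```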
Abstract:
It is now possible to use powerful general-purpose computer architectures to support post-production of both video and multimedia projects. By devising a suitable portable software architecture and using high-speed networking in an appropriate manner, a system has been constructed in which editors are no longer tied to a specific location. New types of production, such as multi-threaded interactive video, are supported. Editors may also work remotely where a very high speed network connection is not currently available. An object-oriented database is used for the comprehensive cataloging of material and to support automatic audio/video object migration and replication. Copyright © 1997 by the Society of Motion Picture and Television Engineers, Inc.
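As a purely illustrative sketch of the sort of catalog record an object database could keep to drive automatic migration and replication of audio/video objects, consider the following; all class, field, and host names are invented here and do not describe the actual system in the paper.

```python
# Illustrative sketch only: a simplified catalog record of the kind an
# object-oriented media database might keep to drive automatic migration and
# replication of audio/video objects. All names and fields are invented; they
# do not describe the system in the paper.
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    object_id: str
    title: str
    duration_s: float
    codec: str
    replicas: list = field(default_factory=list)   # hosts currently holding a copy

    def replicate_to(self, host: str) -> None:
        """Record a new replica so a remote editor can fetch the nearest copy."""
        if host not in self.replicas:
            self.replicas.append(host)

clip = MediaObject("clip-0042", "Interview take 3", 184.2, "MJPEG", ["server-a"])
clip.replicate_to("edit-suite-remote")
print(clip.replicas)   # ['server-a', 'edit-suite-remote']
```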
Abstract:
Three parallel optimisation algorithms, for use in the context of multilevel graph partitioning of unstructured meshes, are described. The first, interface optimisation, reduces the computation to a set of independent optimisation problems in interface regions. The second, alternating optimisation, is a restriction of this technique in which mesh entities are only allowed to migrate between subdomains in one direction. The third treats the gain as a potential field and uses the concept of relative gain to select appropriate vertices to migrate. The three algorithms are compared and are seen to produce partitions of very high global quality very rapidly. The results are also compared with those of another partitioning tool and are shown to be of higher quality, although they take longer to compute.
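To make the gain terminology concrete, the sketch below computes the standard FM-style gain of moving a vertex between subdomains, together with one plausible "relative gain" variant (own gain minus the mean gain of cross-edge neighbours); this only illustrates the idea and is not the authors' exact formulation.

```python
# Simplified illustration of the gain concept used in local refinement: the gain
# of moving vertex v to a neighbouring subdomain is the edge weight it would
# then share with that subdomain minus the edge weight it keeps at home. The
# "relative gain" variant here (own gain minus the mean gain of cross-edge
# neighbours) is an illustration only, not the authors' exact definition.

def gain(v, part, adj, dest):
    """FM-style gain of moving v to subdomain dest."""
    external = sum(w for u, w in adj[v] if part[u] == dest)
    internal = sum(w for u, w in adj[v] if part[u] == part[v])
    return external - internal

def relative_gain(v, part, adj, dest):
    """Own gain minus the mean gain of neighbours across the interface."""
    own = gain(v, part, adj, dest)
    neigh = [gain(u, part, adj, part[v]) for u, w in adj[v] if part[u] == dest]
    return own - (sum(neigh) / len(neigh) if neigh else 0.0)

# Tiny example: 4 vertices, unit edge weights, two subdomains.
adj = {0: [(1, 1), (2, 1)], 1: [(0, 1), (3, 1)],
       2: [(0, 1), (3, 1)], 3: [(1, 1), (2, 1)]}
part = {0: 0, 1: 0, 2: 1, 3: 1}
print(gain(0, part, adj, 1), relative_gain(0, part, adj, 1))
```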
Abstract:
The problem of deriving parallel mesh partitioning algorithms for mapping unstructured meshes to parallel computers is discussed in this chapter. This in itself raises a paradox: we seek to find a high quality partition of the mesh, but to compute it in parallel we already require a partition of the mesh. We overcome this difficulty by deriving an optimisation strategy which can find a high quality partition even if the quality of the initial partition is very poor, and then using a crude distribution scheme for the initial partition. The basis of this strategy is a multilevel approach combined with local refinement algorithms. Three such refinement algorithms are outlined, and some example results are presented which show that they can produce partitions of very high global quality very rapidly. The results are also compared with a similar multilevel serial partitioner and shown to be almost identical in quality. Finally, we consider the impact of the initial partition on the results and demonstrate that the final partition quality is, modulo a certain amount of noise, independent of the initial partition.
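Two of the ingredients mentioned here, a deliberately crude initial distribution and the edge-cut metric that the subsequent multilevel refinement improves, can be illustrated with the small runnable sketch below; the toy graph and the block-assignment scheme are assumptions for illustration, not the chapter's actual scheme.

```python
# Small runnable illustration of two ingredients mentioned in the chapter:
# a crude initial distribution (vertices dealt out by index, with no regard for
# quality) and the edge-cut metric that multilevel refinement then improves.
# The graph is a toy example; this is not the authors' implementation.

def crude_block_partition(num_vertices, nparts):
    """Assign contiguous index blocks to subdomains - cheap but low quality."""
    block = -(-num_vertices // nparts)          # ceiling division
    return {v: min(v // block, nparts - 1) for v in range(num_vertices)}

def edge_cut(edges, part):
    """Number of edges whose endpoints lie in different subdomains."""
    return sum(1 for u, v in edges if part[u] != part[v])

# Toy mesh dual graph: 8 vertices arranged as a 2x4 grid.
edges = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7),
         (0, 4), (1, 5), (2, 6), (3, 7)]
part = crude_block_partition(8, 2)
print(part)                   # {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}
print(edge_cut(edges, part))  # 4 cut edges; refinement would try to reduce this
```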
Abstract:
The cold crucible process, otherwise known as induction skull melting, has the potential to produce high-purity melts of a range of difficult-to-melt materials, including Ti–Al and Ti6Al4V alloys for aerospace, Ti–Ta and other biocompatible materials for surgical implants, silicon for photovoltaic and electronic applications, etc. A water-cooled AC coil surrounds the crucible, inducing currents that melt the alloy and partially suspend it against gravity, away from the water-cooled surfaces. Strong stirring takes place in the melt due to the induced electromagnetic Lorentz forces, and very high temperatures are attainable under the right conditions (i.e., provided that contact with the water-cooled walls is minimised). In a joint numerical and experimental research programme, various aspects of the design and operation of this process are investigated to increase our understanding of the physical mechanisms involved and to maximise process efficiency. A combination of finite-volume (FV) and spectral CFD techniques is used at Greenwich to tackle this problem numerically, with the experimental work taking place at Birmingham University. Results of this study, presented here, highlight the influence of turbulence and free-surface behaviour on the attained superheat, and also discuss coil design variations and dual-frequency options that may lead to winning crucible designs.
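One reason coil frequency (and hence dual-frequency operation) matters is that it sets the electromagnetic skin depth over which the induced currents, and therefore the Lorentz forces and Joule heating, act on the charge. A quick sketch of the standard skin-depth estimate follows; the melt resistivity is a rough assumed value rather than a figure from the paper.

```python
# Quick illustration of the electromagnetic skin depth, which governs how the
# choice of coil frequency (including dual-frequency operation) couples power
# into the charge. The resistivity is a rough assumed figure for a molten Ti
# alloy, not a number taken from the paper.
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m

def skin_depth(resistivity_ohm_m, freq_hz, mu_r=1.0):
    """delta = sqrt(rho / (pi * f * mu0 * mu_r)), in metres."""
    return math.sqrt(resistivity_ohm_m / (math.pi * freq_hz * MU0 * mu_r))

rho_melt = 1.6e-6             # assumed resistivity of the molten alloy, ohm*m
for f in (1e3, 10e3, 100e3):  # example coil frequencies
    print(f"{f / 1e3:>6.0f} kHz -> skin depth ~ {skin_depth(rho_melt, f) * 1e3:.1f} mm")
```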
Abstract:
A toxicity model based on dividing the computational domain into two parts, a control region (CR) and a transport region (TR), for species calculations was recently developed. The model can be incorporated with either the heat source approach or the eddy dissipation model (EDM). The work described in this paper is a further application of the toxicity model, with modifications of the EDM for vitiated fires. In the modified EDM, chemical reaction only occurs within the CR. This is consistent with the approach used in the species concentration calculations within the toxicity model, in which yields of combustion products only change within the CR. A vitiated large room-corridor fire, in which the carbon monoxide (CO) concentrations are very high and the temperatures are relatively low at locations distant from the original fire source, is simulated using the modified EDM coupled with the toxicity model. Compared with the EDM, the modified EDM provides significant improvements in the predictions of temperatures at remote locations. Predictions of species concentrations at various locations follow the measured trends, and good agreement between the measured and predicted species concentrations is obtained at the vitiated fire stage.
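The switching idea can be summarised in a short sketch: a Magnussen-type eddy-dissipation mixing rate is applied only in cells flagged as belonging to the control region and is suppressed in the transport region. The constant and the exact rate expression below are typical literature forms assumed for illustration and may differ from the authors' implementation.

```python
# Hedged sketch of the region-switched reaction rate described in the abstract:
# a standard eddy-dissipation-model (Magnussen-type) mixing rate is applied only
# in cells belonging to the control region (CR) and suppressed in the transport
# region (TR). The constant and rate expression are typical literature forms
# assumed for illustration, not necessarily the authors' exact implementation.

A_EDM = 4.0      # model constant (typical literature value, assumed here)

def fuel_consumption_rate(in_control_region, rho, eps_over_k, y_fuel, y_ox, s):
    """Mean fuel consumption rate [kg/(m^3 s)]; zero outside the control region."""
    if not in_control_region:
        return 0.0   # transport region: species are convected/diffused, no reaction
    return A_EDM * rho * eps_over_k * min(y_fuel, y_ox / s)

# Example: identical local conditions inside and outside the CR.
args = dict(rho=1.1, eps_over_k=5.0, y_fuel=0.02, y_ox=0.10, s=3.5)
print(fuel_consumption_rate(True, **args))    # reacting cell in the CR
print(fuel_consumption_rate(False, **args))   # vitiated, remote cell in the TR: 0.0
```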
Abstract:
In previous publications [1,2], it was rationalized that a large vertical potshell deformation may have a negative impact on the operation of very-high-amperage cells. The MHD-Valdis non-linear magneto-hydrodynamic model was therefore extended to take the displacement of the potshell into account. The MHD cell stability behavior of a 500 kA cell with a 17.3 m long potshell was then studied.